This is a thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. The previous thread is at close to 500 comments.
I occasionally have dreams in which I am playing an RTS videogame like Starcraft. In these, I am a disembodied entity seeing the world only as it might be displayed in such a game. During those dreams, this feels natural and unsurprising and I don't give the matter a second thought. In fact, I've been having these dreams for a while now and only just recently noticed the odd fact that it's not me sitting at a computer playing the game; the game is the only thing in the world at all.
Do other people have dreams in which they are not human-shaped or otherwise experience from a perspective that is very different from real life?
Is LSD like a thing?
Most of my views on drugs and substances were formed, unfortunately, by their history and by distorted perceptions of their users and of those who most visibly appear to support their legality. I was surprised to find the truth about acid at least a little further toward "safe and useful" than my longtime estimate. This opens up the possibility of attempting recreational and introspectively therapeutic use, if only as an experiment.
My greatest concern would be that I would find the results of a trip irreducibly spiritual, or some other nonsense. That I would end up sacrificing a lot of epistemic rationality for some of the instrumental variety, or perhaps losing both in favor of living off of some big, new, and imaginary life-changing experience.
In short, I'm comfortable with recent life changes and recent introspection, and I wonder whether I should expect a trip to reinforce and categorize those positive experiences, or else replace them with something farcical.
Also I should ask about any other health dangers, or even other non-obvious benefits.
One data point here. I've taken a few low-to-lowish dose trips. I'm still the same skeptic/pragmatist I was.
When I'd see the walls billowing and more detail generating out of visual details, I didn't think "The universe is alive!" I thought "my visual system is alive".
I did have an experience which-- to the extent I could put it into words-- was that my sense of reality was something being generated. However, it didn't go very deep-- it didn't have aftereffects that I can see. I'm not convinced it was false, and it might be worth exploring to see what's going on with my sense of reality.
(Created an alternative username for replying to this because I don't want to associate my LSD use with my real name.)
I'd just like to add a contrary datapoint - I had one pretty intense trip that you might describe as "fucking weird", which was certainly mind-blowing in a sense. My sense of time transformed: it stopped being linear and started feeling like a labyrinth that I could walk in. I alternately perceived the other people in the room as being real separate people or as parts of my own subconscious, and at one point it felt like my unity of consciousness shattered into a thousand different strands of thought which I could perceive as complex geometric visualizations...
But afterwards, it didn't particularly feel like I'd learned anything. It was a weird and cool experience, but that was it. You say that one's worldview won't be the same after coming down, but I don't feel like the trip changed anything. At most it might've given me some mildly interesting hypotheses about how the brain might work.
I'm guessing that the main reason for this might be that I already thought of my reality as being essentially constructed by my brain. Tripping did confirm that a bit, but then I never had serious doubts about it in the first place.
The subject basically pretends that everything the hypnotist says is true. Have you ever played a video game and gotten so wrapped up in the virtual world that you just stopped noticing the real world? That's called immersion, and it's achieved by keeping your attention focused on the game. When your attention drifts away from the game, you start noticing that it's 2 am or that you've been playing for four hours, and you remember that you are not in the video game, you're just playing a video game. But as long as your attention remains on the game, you feel like you are actually living in the video game's world. Gamers love the feeling of immersion, so developers put a lot of work into figuring out how to keep gamers' attention, which maintains the immersion.
Hypnosis works on the same principle. The hypnotist uses the patient's full attention to create an imaginary world that feels real to the patient. The difference between video games and hypnosis is that hypnosis patients actively give their attention to the hypnotist, while gamers passively expect the game to take their attention. When a hypnotic induction starts, the subject is asked to imagine something in great detail, effectively putt...
No, it does not. Aspirin reduces the risk of heart attacks and strokes, but it also causes adverse outcomes - most importantly by raising the risk of gastro-intestinal bleeds. For the typical person in their mid-twenties the risk of a heart attack or stroke is so low that the benefit of aspirin will be almost nil; the absolute value of intervening will be vanishingly small even though the proportional decrease in risk stays the same.
There are many possible effects of taking low-dose aspirin other than those described so far - it may reduce the risk of colon cancer, for instance - but there are many possible adverse outcomes too. Cyclooxygenase - the enzyme targeted by aspirin - is involved in many housekeeping functions throughout the body, in particular in the kidneys, stomach, and possibly erectile tissue.
Studies examining risk versus benefit for low-dose aspirin treatment have found that a cardiovascular risk of about 1.5%/year is necessary for the benefits of aspirin to outweigh the ill effects. Whilst no studies have been conducted on healthy young individuals, I don't think such studies should be conducted, given that studies in those at a much higher cardiovascular risk than someone ...
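(To make the proportional-versus-absolute point above concrete, here is a toy calculation. The 20% relative risk reduction and the bleed-harm figure are made-up placeholders for illustration, not clinical numbers:)

```python
def absolute_benefit(baseline_risk, relative_reduction):
    """Absolute yearly risk reduction from an intervention that cuts
    risk by a fixed proportion of the baseline."""
    return baseline_risk * relative_reduction

RELATIVE_REDUCTION = 0.20  # hypothetical: aspirin cuts CV risk by 20%
ASSUMED_HARM = 0.003       # hypothetical: added yearly risk from bleeds

for baseline in (0.0005, 0.015, 0.05):  # young adult / ~threshold / high risk
    benefit = absolute_benefit(baseline, RELATIVE_REDUCTION)
    verdict = "net benefit" if benefit > ASSUMED_HARM else "net harm"
    print(f"baseline {baseline:.2%}/yr -> benefit {benefit:.3%}/yr ({verdict})")
```

The proportional cut is identical in every row; only the baseline risk changes the absolute payoff, which is why the same pill can be worthwhile for a high-risk patient and near-useless for a healthy twenty-five-year-old.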
How do you cure "something is wrong on the Internet" syndrome? It bugs me when people have political opinions that are simplistic and self-congratulating, but I've found that arguing with them wastes time and energy and rarely persuades them.
Really think about how very much is wrong on the internet compared to your capacity to try to correct it. I think this might be a case of cultivating scope sensitivity.
Or (which is what I think I do) combine that with a sense that giving a little shove towards correctness is a public service, but it isn't a strong obligation. This tones the compulsion down to a very moderate hobby.
I am confused by discussions about utilitarianism on LessWrong. My understanding, which comes mostly from the SEP article, was that pretty much all variants of utilitarianism are based on the idea that each person's quality of life can be quantified--i.e., that person's "utility"--and these utilities can be aggregated. Under preference utilitarianism, a person's utility is determined based on whether their values are being fulfilled. Under all of the classical formulations of utilitarianism, everyone's utility function has the same weight when th...
"Consequentialism" is too broad, "utilitarianism" is too narrow, and "VNM rationality" is too clumsy and not generally thought of as a school of ethical thought.
What fiction should I read first?
I have read pretty much nothing but MoR and books I didn't like for school, so I don't really know what my preferences are. I am a mathematician and a Bayesian with an emphasis on the more theoretical side of rationality. I like smart characters that win. I looked at some recommendations in other topics, but there are too many options. If you suggest more than one, please describe a decision procedure that uses information that I have and you don't to narrow it down.
Update: I decided on Permutation City, and was unable to put it down until it was done. I am very happy with the book. I am a lot more convinced now that I will eventually read almost all of these, so the order doesn't matter as much.
Terry Pratchett's Discworld series. I recommend starting with Mort (the fourth book published). The first two books are straight-up parodies of fantasy cliches that are significantly different from what comes afterward, and the third book, Equal Rites, I didn't care for very much. Pratchett said that Mort was when he discovered plot, and it's the book that I recommend to everyone.
Hi, I'm new here and have some questions regarding editing and posting. I read through http://wiki.lesswrong.com/wiki/Help:User_Guide and http://wiki.lesswrong.com/wiki/FAQ but couldn't find the answers there, so I decided to ask here. Probably I overlooked something obvious and a link will suffice.
How do I add follow-up links to a post? Most main and sequences posts have them, but I'm unable to add them to my post. Note: I posted in Discussion as recommended because these were my first posts. I didn't get any feedback to change that, but I'd nonetheless cross-lin
Can someone explain the payoff of a many worlds theory? What is it supposed to buy you?
People talk like it somehow avoids the issue of wave function collapse, but I just see many different collapsed functions in different timelines.
Is the Fun Theory Sequence literally meant to answer "How much fun is there in the universe?", or is it more intended to set a lower bound on that figure? Personally I'm hoping that once I become a superintelligence, I'll have access to currently unimaginable forms of fun, ones that are vastly more efficient (i.e., much more fun per unit of resource consumed) than what the Fun Theory Sequence suggests. Do other people think this is implausible?
Suppose that energy were not conserved. Can we, in that case, construct a physics so that knowledge of initial conditions plus dynamics is not sufficient to predict future states? (Here 'future states' should be understood as including the full decoherent wave-function; I don't care about the "probabilistic uncertainty" in collapse interpretations of QM.) If so, is libertarian free will possible in such a universe? Are there any conservation laws that could be "knocked out" without giving rise to such a physics; or conversely, if conservation of energy is not enough, what is the minimum necessary set?
Conservation of energy can be derived in Lagrangian mechanics from the assumption that the Lagrangian has no explicit time dependence. That is equivalent to saying that the dynamics of the system do not change over time. If the mechanics are changing over time, it would certainly be more difficult to predict future states, and one could imagine the mechanics changing unpredictably over time, in which case future states could be unpredictable as well. But now we don't just have physics that changes in time, we have physics that changes randomly.
I think I find that thought more troubling than the lack of free will.
(I know of no reason why any further conservation laws would break in a universe such as that, so long as you maintain symmetry under translations, rotations, CPT, etc. Time-dependent Lagrangians are not exotic. For example, a physicist might construct a Lagrangian of a system and include a time-changing component that is determined by something outside of the system, like say a harmonic oscillator being driven by an external power source.)
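(For reference, a sketch of the standard derivation behind the first sentence above, using the energy function built from the Lagrangian:)

```latex
% Energy (Hamiltonian) constructed from L(q, \dot q, t):
E \;=\; \sum_i \dot q_i \,\frac{\partial L}{\partial \dot q_i} \;-\; L
% Differentiating along a solution of the Euler--Lagrange equations gives
\frac{dE}{dt} \;=\; -\,\frac{\partial L}{\partial t}
% so E is conserved exactly when L has no explicit time dependence;
% a driven (time-dependent) Lagrangian makes dE/dt nonzero.
```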
Every now and then, there are discussions or comments on LW where people talk about finding a "correct" morality, or where they argue that some particular morality is "mistaken". (Two recent examples: [1] [2]) Now I would understand that in an FAI context, where we want to find such a specification for an AI that it won't do something that all humans would find terrible, but that's generally not the context of those discussions. Outside such a context, it sounds like people were presuming the existence of an objective morality, but I th...
How does muscle effort convert into force/Joules applied? What are the specs of muscles? An example of "specs" would be:
Muscle:
0 <= battery <= 100
Each second: increase battery by one if possible
At will: decrease battery by one to apply one newton for one second
I am wondering because I was trying to optimize things like my morning bike ride across the park, questions like whether I should try to maximize my speed for the times when I'm going uphill, so gravity doesn't pull me backward for so long; or whether it is an inefficient move to wa...
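(Purely as a sketch, here is the hypothetical "battery" spec above made executable. It is the questioner's toy model, not real muscle physiology, which involves ATP, glycogen, and lactate dynamics:)

```python
class ToyMuscle:
    """The hypothetical 'battery' muscle spec from the question above.
    Not a physiological model -- just the stated spec made executable."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.battery = capacity  # 0 <= battery <= capacity

    def tick(self, spend=0):
        """Advance one second: recover 1 unit if possible, then spend up
        to `spend` units, applying 1 newton per unit for that second.
        Returns the force actually applied (newtons)."""
        self.battery = min(self.capacity, self.battery + 1)
        spent = min(spend, self.battery)
        self.battery -= spent
        return spent

muscle = ToyMuscle()
# Burst strategy: push hard for the first 30 seconds, then coast.
forces = [muscle.tick(5 if t < 30 else 0) for t in range(60)]
print(f"impulse: {sum(forces)} N*s, battery remaining: {muscle.battery}")
```

Under a spec like this you could compare strategies (burst vs. steady effort) by total impulse delivered; whether anything similar holds for real muscles is exactly the open question being asked.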
Is reading fiction ever instrumentally useful (for a non-writer) compared to reading more informative literature? How has it been useful to you?
How long does it take others to write a typical LW post or comment?
I perceive myself as a very slow writer, but I might just have unrealistic expectations.
Is there any reason we don't include a risk aversion factor in expected utility calculations?
If there is an established way of considering risk aversion, where can I find posts/papers/articles/books regarding this?
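(One pointer, hedged: in the standard von Neumann-Morgenstern framework, risk aversion isn't a separate factor but falls out of the curvature of the utility function; a concave utility over outcomes already penalizes variance. A minimal sketch, with log utility as an assumed illustrative choice:)

```python
import math

def expected_utility(outcomes, probs, u):
    """Expected utility of a lottery under utility function u."""
    return sum(p * u(x) for x, p in zip(outcomes, probs))

u = math.log  # concave => risk-averse; an assumed choice for illustration

# A 50/50 gamble between $100 and $10,000, versus its expected value for sure.
gamble_eu = expected_utility([100, 10_000], [0.5, 0.5], u)
sure_eu = u(5_050)

# Certainty equivalent: the sure amount whose utility equals the gamble's.
certainty_equivalent = math.exp(gamble_eu)  # sqrt(100 * 10_000) = 1,000

print(f"EU(gamble) = {gamble_eu:.3f} < EU(sure $5,050) = {sure_eu:.3f}")
print(f"certainty equivalent: ${certainty_equivalent:,.0f}")
```

A risk-averse agent here would trade the gamble for any sure amount above $1,000, even though the gamble's expected value is $5,050. Searching for "risk aversion", "certainty equivalent", and "von Neumann-Morgenstern utility" should turn up the standard literature.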
A significant amount of discussion on Less Wrong appears to be of the following form:
1: How do we make a superintelligent AI perform more as we want it to, without reducing it to a paperweight?
Note: reducing it to a paperweight is the periodically referenced "Put the superintelligence in a box and then delete it if it sends any output outside the box." school of AI Safety.
Something really obvious occurred to me, and it seems so basic that there has to be an answer somewhere, but I don't know what to look under. What if we try flipping the questio...
Does the unpredictability of quantum events produce a butterfly effect on the macro level? i.e., since we can't predict the result of a quantum process, and our brains are composed of eleventy zillion quantum processes, does that make our brains' output inherently unpredictable as well? Or do the quantum effects somehow cancel out? It seems to me that they must cancel out in at least some circumstances or we wouldn't have things like predictable ball collisions, spring behavior, etc.
If there is a butterfly effect, wouldn't that have something to say about Omega problems (where the predictability of the brain is a given) and some of the nastier kinds of AI basilisks?
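(A toy sketch of the "cancel out" intuition only - this demonstrates plain statistical averaging, not decoherence: a macroscopic quantity that sums huge numbers of independent microscopic fluctuations has relative noise shrinking like 1/sqrt(N):)

```python
import random

def mean_deviation(n_micro):
    """Average n_micro independent +/-1 'micro-kicks' and report how far
    the macroscopic mean strays from zero."""
    total = sum(random.choice((-1, 1)) for _ in range(n_micro))
    return abs(total) / n_micro

for n in (100, 10_000, 1_000_000):
    print(f"N = {n:>9}: |mean| ~ {mean_deviation(n):.6f}")
```

This is why ball collisions and springs look deterministic; the interesting cases are systems (arguably including brains) where some amplifying mechanism can make a single microscopic event matter macroscopically.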
I can't believe I am explaining MWI instead of arguing against it... whatever has this site done to me? Anyway, grossly simplified, you can think of the matter as being conserved because the "total" mass is the sum of masses in all worlds weighted by the probability of each world. So, if you had, say, 1 kg of matter before a "50/50 split", you still have 1 kg = 0.5*1 kg + 0.5*1 kg after. But since each of the two of you after the split has no access to the other world, this 50% prior probability becomes a 100% posterior probability.
Also note that there is no universal law of conservation of matter (or even energy) to begin with, not even in a single universe. It's just an approximation given certain assumptions, like time-independence of the laws describing the system of interest.
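(In the same grossly simplified picture, the 50/50 arithmetic above generalizes to any branching, assuming each branch contains the same mass m:)

```latex
m_{\text{total}} \;=\; \sum_i p_i \, m_i \;=\; m \sum_i p_i \;=\; m,
\qquad \text{since } \sum_i p_i = 1
```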
Requesting advice on a very minor and irrelevant ethical question that's relevant to some fiction I'm writing.
The character involved has the power to "reset" the universe, changing it to a universe identical to some previous time, except that the character himself (if he's still there - if he isn't, he's killed himself) retains all his memories as they were, rather than having them change.
Primarily, I'm thinking through the ethical implications here. I'm not good with this sort of thing, so could somebody talk me through the implications if the character follows Lesswrong ethics?
When is self denial useful in altering your desires, vs satisfying them so you can devote time to other things?
Another stupid and mostly trivial computer question: When I go into or out of "fullscreen mode" when watching a video, the screen goes completely black for five seconds. (I timed it.) This is annoying. Any advice?
My stupid questions are these: Why are you not a nihilist? What is the refutation of nihilism, in a universe made of atoms and the void? If there is none, why have the philosophers not all been fired and philosophy abolished?
In a universe made of atoms and the void, how could it be the one true objective morality to be gloomy and dress in black?
Why are you not a nihilist?
For the same reason why I don't just lie down and stop doing anything at all. Knowledge of the fact that there isn't any ultimate meaning doesn't change the fact that there exist things which I find enjoyable and valuable. The part of my brain that primarily finds things interesting and valuable isn't wired to make its decisions based on that kind of abstract knowledge.
Why are you even reading this comment? :-)
What is the refutation of nihilism, in a universe made of atoms and the void?
"Sure, there is no ultimate purpose, but so what? I don't need an ultimate purpose to find things enjoyable."
why have the philosophers not all been fired and philosophy abolished?
Philosophy is the study of interesting questions, and nihilism hasn't succeeded in making things uninteresting.
I've seen a quoted piece of literature in the comments section, but instead of the original letters, they all seemed to be replaced by others. I think I remember seeing this more than once, and I still have no idea why it should be like that.
Short of hearing about it in the news, how does one find out whether a financial institution should be eligible to be the keeper of one's money? (I am specifically referring to ethical practices, not whether one could get a better interest rate elsewhere.)
What happens after a FAI is built? There's a lot of discussion on how to build one, and what traits it needs to have, but little on what happens afterward. How does the world/humanity transition from the current systems of government to a better one? Do we just assume that the FAI is capable of handling a peaceful and voluntary global transition, or are there some risks involved? How do you go about convincing the entirety of humanity that the AI that has been created is "safe" and to put our trust in it?
Dear Less Wrong,
I occasionally go through existential crises that involve questions that normally seem obvious, but which seem much more perplexing when experiencing these existential crises. I'm curious then what the answers to these questions would be from the perspective of a rationalist well versed in the ideas put forth in the Less Wrong community. Questions such as:
What is the meaning of life?
If meaning is subjective, does that mean there is no objective meaning to life?
Why should I exist? Or why should I not exist?
Why should I obey my genetic pro...
I've heard that people often give up on solving problems sooner than they should. Does this apply to all types of problems?
In particular, I'm curious about personal problems such as becoming happier (since "hard problems" seems to refer more to scientific research and building things around here), and trying to solve any sort of problem on another person's behalf (I suspect social instincts would make giving up on a single other person's problem harder than giving up on general problems or one's own problems).
A stupid question: in all the active discussions about (U)FAI I see a lot of talk about goals. I see no one talking about constraints. Why is that?
If you think that you can't make constraints "stick" in a self-modifying AI, you shouldn't be able to make a goal hierarchy "stick" either. If we assume that we CAN program in an inviolable set of goals, I don't see why we can't program in an inviolable set of constraints as well.
And yet this idea is obvious and trivial -- so what's wrong with it?
When my computer boots up, I usually get the following error message:
BIOS has detected unsuccessful POST attempt(s).
Possible causes include recent changes to BIOS
Performance Options or recent hardware change.
Press 'Y' to enter Setup or 'N' to cancel and attempt
to boot with previous settings.
If I press Y, the computer enters Setup. I then "Exit Discarding Changes" and the computer finishes booting. If I press N, the computer tries to boot from the beginning and gives me the same message. It's somewhat annoying to have to go into the BIOS every time I want to reboot my computer - does anyone have any idea what's causing this or how to fix it?
Set up an account on the Wiki, with the same name as your LessWrong account. Then make a user page for it. After a day, LW will automatically use that to make your profile page. (Thanks to gwern for informing me about this.)
Thank you. I'm just creating http://wiki.lesswrong.com/mediawiki/index.php?title=User:Gunnar_Zarncke and hope that it will get linked to http://lesswrong.com/user/Gunnar_Zarncke/
Halt, I have a problem here: saving doesn't seem to work. The page stays empty and I can't leave the edit area. The same goes for my talk page. The wiki appears to be slow overall.