All of aleksiL's Comments + Replies

You worry about that all-important status when you fear losing it.

Want to win? Then focus on winning, not on not-losing. You need to if you want to be seen as high-status, anyway. Fear of loss is low-status, so is worrying about what others think.

Navigate the minefield, sure. But do it from a position of strength, not of weakness.

1[anonymous]
Of course. There are two relevant terms that I learned in another language; one way to translate them would be "seek success" and "avoid failure". Seeking success is pursuing your dream job; avoiding failure is fearing you will not be able to pay the bills and so accepting any job. Seeking success is far better, but if you are not blessed with sky-high testosterone and are thus timid and not driven, you cannot really do much more than avoid failure. It is not exactly a choice you can make, it is more about what you are. Of course you can try to slowly change what you are, i.e. work on developing courage. Wanting to win is in itself a keyword used by the success-oriented, who believe they can be / can do better than others. The failure-avoidant want not to prove worse than others, and thus seek not to lose rather than to win. It takes a lot of work on courage to go from one to the other, and it is not clear what methods develop this kind of courage best. Maybe this (courage or self-confidence methods) would deserve a top-level post.

Harry pulled the trigger. Bang or click?

What happens if you AK someone keyed to the horcrux 2.0 network?

Prediction: If Hermione is AK'd, her soul will be shunted to the network. There will be no death burst and Voldemort's horcruxing attempt fails. Then things get interesting.

0DanArmak
They are now outside the castle. Quirrell should just Apparate away, kill a random wizard, create a horcrux, and take his network back.

Correct me if I'm wrong, but there seem to be two separate challenges on the Potions room parchment: a simple one consistent with canon and the skills and abilities of the target audience, and a complex one requiring an hour or so of careful and precise work. Looks like Harry and Quirrelmort focus exclusively on the long formula, ignoring the puzzle.

On rereading the relevant part of Ch. 107, it appears that Harry has an idea he doesn't want to share shortly after the broomstick conversation. On a close reading, it appears that he manages to avoid the topic... (read more)

4b_sen
Ooooh. That may well be a challenge that behaves differently around people who were or weren't supposed to be there. Perhaps solving the logic puzzle is sufficient to gain entry, and most students will either not think to flip the parchment over or get bored before they finish brewing the potion it describes. But Snape expects Voldemort to notice the lack of wards and decide to brew the potion, and accordingly has booby-trapped the potion in some manner that triggers upon its completion or after some time sufficient that he expects students to get bored first. The duration may well also be significant for Time-Turning.

Hmm. How about having someone else die in Hermione's place?

I don't recall offhand if the death burst was recognizable as Hermione, but otherwise it seems doable. Dumbledore said he felt a student die and only realized it was Hermione once he saw her.

You'd need Polyjuice for the visual appearance, and either Hermione's presence or a fake Patronus for past-Harry to follow. Hermione is unlikely to go along with the plan willingly, so she'd need to be tricked or incapacitated. Hard to tell which would be easier.

Given the last words, Hermione's doppelganger mig... (read more)

1robryk
Either Dumbledore is in on it and lied to Harry, or it was a student. Harry seemed to think so, but he was obviously biased by seeing Hermione. Doesn't it wear off after death? Ch92 spoiler: Ur'f nyvir ng qvaare naq ur unf ab gvzr-gheare (Uneel pna'g gnxr bgure crbcyr nybat), fb vg qbrfa'g frrz vg unq unccrarq. Overall, this plan requires at least 2 hard things to happen correctly: an identical fake magic burst, and either getting the real Hermione there and screaming or Patronus shenanigans. I disbelieve this strongly.

Lesswrongers are surprised by this? It appears figuring out metabolism and nutrition is harder than I thought.

I believe that obesity is a problem of metabolic regulation, not overeating, and this result seems to support my belief. Restricting calories to regulate your weight is akin to opening the fridge door to regulate its temperature. It might work for a while, but in the long run you'll end up breaking both your fridge and your budget. Far better to figure out how to adjust the thermostat.

Some of the things that upregulate your fat set point are a histor... (read more)

I'm pretty sure "trying to eat less" is exactly the wrong thing to do. Calorie restriction just triggers the starvation response which makes things worse in the long run.

Change what you eat, not how much.

-2A1987dM
I've replaced “eat less” with “lose weight” because I don't want to go into this, but see Lumifer's reply.
0Lumifer
Physics is still relevant. The only way to lose weight (outside of surgery) is to spend more energy than you take in. The problem, of course, is that your energy intake and your energy output are functions of each other plus a lot of other things besides (including what's on your mind). I still think that for most people (aka with an uninformative prior) the advice of "Eat less, move more" is a good starting point. Given more data, adjust as needed.
-1wedrifid
I'm pretty sure "Force feeding yourself as much fat as you can keep down with the aid of anti-emetics, taking glucose intravenously while injecting insulin, estrogen and testosterone and taking a β2 antagonist" is closer to "exactly the wrong thing to do".

You have it backwards. The bet you need to look at is the risk you're insuring against, not the insurance transaction.

Every day you're betting that your house won't burn down today. You're very likely to win but you're not making much of a profit when you do. What fraction of your bankroll is your house worth, how likely is it to survive the day and how much will you make when it does? That's what you need to apply the Kelly criterion to.
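As a back-of-the-envelope illustration of that framing, here is a minimal Python sketch. All the specific numbers (daily fire probability, house value, bankroll, premium) are invented for the example; only the "bet you're already making" framing comes from the comment above.

```python
# A rough sketch of the framing above. The numbers are made up for illustration.

def kelly_fraction(p_win: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll to risk on a bet that pays
    b units per unit risked and is won with probability p_win."""
    return p_win - (1.0 - p_win) / b

p_no_fire = 1 - 1e-5          # assumed daily probability the house survives
house_value = 300_000.0       # the implicit stake: what you lose if it burns
bankroll = 500_000.0          # assumed total net worth
daily_premium = 2.0           # assumed cost of insuring for one day

# Skipping insurance is the bet: you risk the house to "win" the saved premium.
b = daily_premium / house_value          # payout per unit risked
f_star = kelly_fraction(p_no_fire, b)    # Kelly-optimal exposure to this bet

actual_exposure = house_value / bankroll
print(f"Kelly-optimal exposure: {f_star:.2f}")          # about -0.50
print(f"Exposure you're stuck with: {actual_exposure:.2f}")  # 0.60
```

Under these made-up numbers the Kelly fraction comes out negative, so a log-wealth bettor wants no exposure to the uninsured risk at all; since owning the house forces an exposure of 0.6, paying the premium is the better side of the bet. Whether that holds in practice depends entirely on how the real premium compares to the real risk.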

0Metus
Have you read my reply to RichardKennaway? I explicitly look at the case you mention.

Have you checked the house for mold? The night terrors seem pretty well-explained by mycotoxins and the odds of the other weirdness also go up if something is screwing with your biochemistry.

4Intrism
Allow me to add an experimental test to this: try sleeping in a well-ventilated room (e.g. multiple windows open, box fan in a window) and see if incidents decrease.

Imagine that two exact replicas of a person exist in different locations, exactly the same except for an antagonism in one of their values. Both could not be correct at the same time about that value.

The two can't be perfectly identical if they disagree. You have to additionally assume that the discrepancy is in the parts that reason about their values instead of the values themselves for the conclusion to hold.

-1JonatasMueller
What if I changed the causation chain in this example, and instead of having the antagonistic values caused by the identical agents themselves, I had myself inserted the antagonistic values in their memories, while I did their replication? I could have picked the antagonistic value from the mind of a different person, and put it into one of the replicas, complete with a small reasoning or justification in its memory. They would both wake up, one with one value in their memory, and another with an antagonistic value. What would it be that would make one of them correct and not the other? Could both values be correct? The issue here is questioning if any values whatsoever can be validly held for similar beings, or if a good justification is needed. In CEV, Eliezer proposed that we can make errors about our values, and that they should be extrapolated for the reasonings we would make if we had higher intelligence.

Want to be like or appear to be like? I'm not convinced people can be relied on to make the distinction, much less choose the "correct" one.

3Richard_Kennaway
Or do they want to be like those folks appear to be like?
4roystgnr
I don't like the study setup there. One readthrough of spoiled vs one readthrough of unspoiled material lets you compare the participants' hedonic ratings of dramatic irony vs mystery, and it's quite reasonable that the former would be equally or more enjoyable... but unlike in the study, in real life unspoiled material can be read twice: the first time for the mystery, then the second time for the dramatic irony; with spoiled material you only get the latter.
5A1987dM
Those error bars look large enough that I could still be right about myself even without being a total freak.

According to a single counter-intuitive (and therefore more likely to make headlines), unreplicated study.

How would this encourage them to actually value logic and evidence instead of just appearing to do so?

2magfrump
I think the most common human tactic for appearing to care is to lie to themselves about caring until they actually believe they care; once this is in place they keep up appearances by actually caring if anyone is looking, and if people look often enough this just becomes actually caring.

The subject's capacity for deception is finite, and will be needed elsewhere. Sooner or later it becomes more cost-effective for the sincere belief to change.

6Omegaile
People tend to conform to their peers' values.
4ChristianKl
It's not a question of encouragement. Humans tend to want to be like the high-status folk that they look up to.
2HalMorris
Maybe the idea could gain popularity from a survival-island type reality program in which contestants have to measure the height of trees without climbing them, calculate the diameter of the earth, or demonstrate the existence of electrons (in order of increasing difficulty).
0HalMorris
Couple of attempts: the hard sciences; professions with a professional code of ethics, and consequences for violating it.

Do you think continuous spatial + temporal dimensions have problems continuous spatial dimensions lack? If so, what and why?

0Pentashagon
It may be that I don't have a good understanding of quantum mechanics. In Newtonian mechanics the state of the universe depends on the prior positions and velocities of, and forces on, all the particles. The velocities and forces are both expressed in terms of time derivatives, so if time were removed from the equations Zeno's paradox would imply that either nothing could ever move or that motion was discontinuous whenever the next state of the universe was calculated. From browsing Wikipedia it looks like there are time-dependent as well as time-independent Schrödinger equations, used for moving and stationary states respectively. Is it actually possible to express the entire universe as a single time-independent equation? If so, does that mean that what we actually experience at any "time" is just a random sample from the steady-state probability distribution? Does that mean we should always expect the universe to tend toward some specific distribution (maybe just the heat death)?

Wouldn't the failure to acknowledge all the excitement nuclear war would cause be an example of the horns effect?

I immediately answered no and rated everyone who said yes as completely undateable

I can understand answering no for emotional or political reasons, but rating the epistemically correct answer as undateable? That's... a good reason for me to answer such questions honestly, actually.

2jooyous
So they have a mechanism for you to write an explanatory comment, right? But they don't allow you to filter on the existence of an explanatory comment, which would allow someone to explain their thought process -- which I think is really necessary because "exciting" does strongly connote "good idea" the way "awesome" does. In which case, I would expect a person trying to avoid the horns effect to just refuse to answer the question on the grounds that it's misleading as a moral compass gauge because answering "yes" might cause them to get filtered out because you can't condition on the existence of an explanation. So I expected most "yes" answers to be generally unaware people. I don't think that question was intended for rational arguments for or against nuclear war; I think it was intended for ... morality. I admit "completely undateable" is an exaggeration, but I think I decided engaging that question was a red flag for immaturity. But that's why I'm really confused why that question was there in the first place because it doesn't distinguish those two groups of people -- the ones that are thinking really really carefully and the ones that aren't thinking at all. It's bad for morality!

Given you have enemies you hate deeply enough? Yes.

Having such enemies in the first place? Definitely not.

Having such enemies in the first place? Definitely not.

There are entire cultural systems of tracking prestige based around having such enemies; the vestiges of them survive today as modern "macho" culture. Having enemies to crush mercilessly, and then doing so, is an excellent way to signal power to third parties.

I think that you are underestimating the efficiency of intersystem communication in a world where a lot of organizational communication is handled through information technology.

Speech and reading seem to be at most 60 bits per second. A single neuron is faster than that.

Compare to the human brain. The optic nerve transmits 10 million bits per second and I'd expect interconnections between brain areas to generally fall within a few orders of magnitude.

I'd call five orders of magnitude a serious bottleneck and don't really see how it could be significant... (read more)
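For what it's worth, here is the arithmetic behind the "five orders of magnitude" figure, using just the two rates quoted above (60 bit/s for speech/reading, 10 Mbit/s for the optic nerve); a minimal sketch, nothing more.

```python
import math

speech_bps = 60               # upper estimate for speech/reading, quoted above
optic_nerve_bps = 10_000_000  # optic nerve estimate, quoted above

ratio = optic_nerve_bps / speech_bps
print(f"{ratio:,.0f}x, i.e. about 10^{math.log10(ratio):.1f}")
# -> 166,667x, about 10^5.2: roughly five orders of magnitude
```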

2sbenthall
Wait... one sec. Isn't all that redundancy in human society a good thing, from the perspective of saving it from existential risk? If I were an AI, wouldn't one of the first things I do be to create a lot of redundant subsystems loosely coordinating in some way, so that if half of me is destroyed, the rest lives on?
2sbenthall
It looks to me like there's a continuum within organizations as to whether they do most of their information processing using hardware or wetware. I acknowledge that improvements in machine intelligence may shift the burden of things to machines. But I don't think that changes the fact that many organizations already are superintelligences, and are in the process of cognitively enhancing themselves. I guess I'd argue that organizations, in pursuit of cognitive enhancement, would coordinate their human and machine subsystems as efficiently as possible. There are certainly cases where specialists are taken care of by their organizations (ever visited a Google office, for example?). While there may be overlap in skills, there's also lots of heterogeneity in society that reflects, at least in part, economic constraints.

Do humans have goals in this sense? Our subsystems seem to conflict often enough.

0Bruno_Coelho
We have goals, but they are not consistent over time. The worry about artificial agents (with more power) is that these values, if badly implemented, would create losses we could not accept, like extinction.
0hairyfigment
In this case it doesn't seem like much of a conflict. I think that barring more-or-less obvious signs of disarray we can count on organizations trying to serve their leaders' self-perceived interests - which, while evil, entail not killing humanity - unless and until the singularity changes the game.

An organization could be viewed as a type of mind with extremely redundant modular structure. Human minds contain a large number of interconnected specialized subsystems, in an organization humans would be the subsystems. Comparing the two seems illuminating.

Individual subsystems of organizations are much more powerful and independent, making them very effective at scaling and multitasking. This is of limited value, though: it mostly just means organizations can complete parallelizable tasks faster.

Intersystem communication is horrendously inefficient in o... (read more)

1Viliam_Bur
In a company large enough, the humans would be like the cells, and the departments would be the subsystems. The functional difference between e.g. the accounting department and the private security department can be big, even if both are composed of biologically almost the same homo sapiens individuals. When comparing the speed of organizations with speed of humans, on different scales the speed comparison can be different. As an analogy, a bacterium can reproduce faster than a human, but a human will write a book faster. Similarly, humans can do many things faster than organizations, but some other things are just out of reach for an individual human without an organization of some kind. I would say that today, humans are relatively advanced in the human-space, shaped by biological evolution and culture for a long long time. Compared with that, organizations seem rather primitive and fragile in the organization-space. Yet even today the organizations can do things that individual humans can't. It is like looking at the first multi-cellular organisms and deciding that although they have some small advantages over the single-cellular ones, they are not impressive enough.
4sbenthall
I think that you are underestimating the efficiency of intersystem communication in a world where a lot of organizational communication is handled through information technology. Take a modern company with a broad reach. The convenience store, CVS, say. Yes, there is a big organizational hierarchy staffed by people. But there is also a massive data collecting and business intelligence aspect. Every time they try to get you to swipe your CVS card when you buy toothpaste, they are collecting information which they then mine for patterns on how they stock shelves and price things. That's just business. It's also a sophisticated execution of intelligence that is far beyond the capacity of an individual person. I don't understand your point about specialization. Can you elaborate? Also, I don't understand what the difference between a 'superintelligence' and a 'sped-up human' would be that would be pertinent to the argument.

One of the advantages of bureaucracy is creating value from otherwise low-value inputs. The collection of people working in the nearest McDonald's probably isn't capable of figuring out from scratch how to run a restaurant. But following the bureaucratic blueprint issued from headquarters allows those same folks to produce a hamburger on demand, and get paid for it.

That's a major value of bureaucratic structure - lowering the variance and raising the downside (i.e. a fast food burger isn't great, but it meets some minimum quality and won't poison you).

I haven't seen one example of a precise definition of what constitutes an "observation" that's supposed to collapse the wavefunction in the Copenhagen interpretation. Decoherence, OTOH, seems to perfectly describe the observed effects, including the consistency of macro-scale history.

This in my opinion proves that memory sticks with the branch my consciousness is in.

Actually it just proves that memory sticks with the branch it's consistent with. For all we know, our consciousnesses are flitting from branch to branch all the time and we just don't ... (read more)

Of course, this has its own moral dilemmas as well - such as the fact that you're as good as dead for your loved ones in the timeline that you just left - but generally smaller than erasing a universe entirely.

You could get around this by forking the time traveler with the universe: in the source universe it would simply appear that the attempted time travel didn't work.

That would create a new problem, though: you'd never see anyone leave a timeline but every attempt would result in the creation of a new one with a copy of the traveler added at the dest... (read more)

Interesting, I've occasionally experimented with something similar but never thought of contacting Autopilot this way. Yeah, that's what I'll call him.

I get the feeling that this might be useful in breaking out of some of my procrastination patterns: just call Autopilot and tell him which routine to start. Not tested yet, as then I'd forget about writing this reply.

1MaoShan
It's as if your own body is a guy that does his job if you train him right, but makes stupid decisions when something unexpected happens. I just take a more literal approach with the interaction. I also refer to him as "my answering machine" when I am woken up in the middle of the night. It took my wife a while to realize that the person she was talking to was "not me". My answering machine can make perfectly normal-sounding replies to normal questions, but is unable to come up with creative answers to unusual questions, and I have no memory of the events. Another unnamed, possibly separate module runs when my body is alarmed, but I am not yet conscious. It constantly asks for data, verbally questioning other humans nearby, "What is happening? What is going on? What time is it?" Unlike situations with the answering machine, I retain conscious memory of the occurrence, but not from a first-person perspective, more like I remember somebody telling me about what happened, but in this case that person was (allegedly) me.

Agree. You use process X to determine the setup and agents instantiating X are going to be constrained. Any decision theory would be at a disadvantage when singled out like this.

I get the feeling that if Harry learns the Killing Curse he'll manage to tweak it somehow, on the order of Patronus 2.0 or partial Transfiguration.

I arrived at this idea by intuition - it seems to fit, but I don't think there's much explicit support. AFAICT I'm mostly pattern-matching on story logic, AK's plot significance and symmetry with Patronus, and Harry's talent for breaking things by thinking at them.

I think my probability estimate for this (given that Harry learns AK in the first place) is around 30%, but I suspect I'm poorly calibrated.

175th
Interesting. I'm finding it hard to imagine what a "True Killing Curse" would do differently; the Standard Killing Curse seems to leave things pretty much good and dead. Perhaps it would kill Phoenixes permanently? Offing Fawkes would be a nice Yudkowskian punch in the gut. Or maybe it would kill all of the victim's horcruxes as well? But it'd be a drag if Eliezer introduced the Cvbarre ubepehk only to have Harry discover a shortcut that makes him not have to deal with it.

I've been meditating for about two weeks now, and been progressing surprisingly quickly. Concentration came easy, and I started having interesting experiences pretty much straight away. I'd like to share my latest sitting and hopefully get some input.

I sat cross-legged on my couch and started concentrating on my breath as usual. Soon there was a discontinuity: my concentration lapsed, it felt as if my attention was fully in a nonsensical, dreamlike thought for just a second and suddenly I was in a clearer, lighter, easier state. It's happened similarly sev... (read more)

My biggest problem when meditating is that when I focus on my breath, I switch to breathing consciously[...]

I've started to suspect that this difficulty is actually a feature. Observing without interfering seems like an important skill to learn if the goal is to be more aware of your thoughts and actions in general.

Imagine, say, being consciously aware of every detail of your leg movements while walking; it becomes a lot more difficult if you don't know how to stay out of your own way.

It seems to me as if you view terminal goals as universal, not mind-specific. Is this correct, or have I misunderstood?

The point, as I understand it, is that some humans seem to have happiness as a terminal goal. If you truly do not share this goal, then there is nothing left to explain. Value is in the mind, not inherent in the object it is evaluating. If one person values a thing for its own sake but another does not, this is a fact about their minds, not a disagreement about the properties of the thing.

Was this helpful?

Another data point: I wasn't too bothered with the general sales-pitchiness of the first two posts, possibly because I've occasionally gained useful knowledge by reading actual sales pitches from the self-help crowd.

That said, you had me hooked by the third paragraph of Part I and I've been going "get to the POINT already" since then. I do see some value in personal testimony, but it should be far more condensed.

You seem to currently have exactly one downvoted comment outside the HPMOR discussion and that at only -1. What makes you think the effects you see aren't simply a result of people actively participating in these threads noticing and responding to comments they deem poorly supported? No following around required.

As for the downvotes, I suspect an overwhelming majority of them result from your adversarial reactions to criticism, not the HPMOR content. How many downvotes had this received before you added this edit?

What the hell with the random neg reps, se

... (read more)
0thomblake
You should remove the word "EDIT" from your quote, as it makes the comment harder to parse and isn't necessary in context.

Here's a vote for not-mind-reading. This seems deliberately written to suggest Quirrell's reacting to body language, not thought:

Without any conscious decision, she shifted her weight to the other foot, her body moving away from the Defense Professor -

"So you think I am the one responsible?" said Professor Quirrell.

What's the in-story justification for the dementor's presence anyway? I thought it seemed awfully convenient in case Harry decided to demonstrate his Patronus 2.0 but I couldn't figure out how it'd help enough.

I'd forgotten about the potential for ruining others' patronuses, though. That makes a lot more sense, especially considering he'd just reached into his dark side - possibly deeper than he'd ever willingly done before.

My guess: it wouldn't be enough at this point to just demonstrate a superior patronus or tell people about the possibility of ruining ... (read more)

4DanArmak
To protect Wizengamot members from dangerous criminals brought before them.

Both of those seem to fit the pattern perfectly when you consider evolution as an actor.

Maybe we should be discussing optimization power instead of intelligence; evolution seems a pretty decent manipulator considering how stupid it is.

2Pavitra
Yes, that's what I had in mind. Optimization power; strength of causation from desire to effect.

An interesting post. I immediately thought of asking "What habits would I adopt if the long-term effects were in full force immediately?"

I think I have some thinking to do.

Edit: typo.

1CraigMichael
I like this phrasing.

(ch56)

Has the nature of Harry's mysterious dark side been established yet? If not, the latest chapter gives a strong hint toward it being a shard of Voldemort.

In chapter 56, Harry discovers that his vulnerability to Dementors is due to his dark side's fear of death. And, back in chapter 39, in the discussion between Harry and Dumbledore it was suggested that Voldemort was motivated by fear of death. Not quite proof, but interesting nonetheless.

2hairyfigment
"Isn't that like having a coitus fetish?" And of course Harry heard his parents die at the hands of someone who wanted to kill him, after which he arguably suppressed the memory. Sure, canon!Harry doesn't fear death as much, but the Sorting Hat told MoR!Harry that going to Gryffindor or Hufflepuff would change him. The best evidence here for your theory lies in the fact that he managed to give himself that kind of acceptance and friendship.
2Danylo
I wonder if Harry's dark side (assuming it's part of Voldemort) knows it's part of Voldemort. If it doesn't, what will happen to it when it/Harry finds out? Obviously Harry considers Voldemort an enemy. Will he try to destroy it?

That was beautiful. And funny. I don't think I've ever laughed and cried simultaneously before. Not at the same thing anyway.

Just... wow.

5Eliezer Yudkowsky
I printed that out and put it on my bedroom wall at one point.

Where do people get this "no depth cues" claim? The way her lifted leg moves so obviously suggests a clockwise motion that even Alicorn's link can't make my brain see the counterclockwise motion for longer than a round or two at most.

I mean, the only way the perspective makes any sense is if the lifted leg is furthest away when it's the highest up in the 2d image. Yes, the shadow/reflection is all wrong but for some reason my brain just refuses to give that priority.

What cues do others use? I'd love to see variations of this image with different ... (read more)

1Morendil
The Wikipedia page has more info, including a breakdown into individual frames. It looks as if it shouldn't be too hard to use an animated GIF editor to modify the original, adding clues as desired.

Sounds like your definition of "well-socialized" is closer to "well-adjusted" than RobinZ's.

As I understand them, skill in navigating social situations, epistemic rationality and psychological well-being are all separate features. They do seem to correlate, but the causal influences are not obvious.

ETA: Depends a lot on the standard you use, too. RobinZ is probably correct if you look at the upper quartile but less so for the 99th percentile.

0RobinZ
As an aside, I would say that jimrandomh's point relies upon describing a substantial population - more like the set of those above the upper quartile than those above the 99th percentile.

In short, I used to believe that social skills are a talent you're born with, not a skill to be developed. Luckily just being around people and paying attention improved my eye for social cues enough that I eventually noticed.

This relates to Carol S. Dweck's book Mindset, which I've mentioned before. I'm thinking of writing more about it sometime soon.

It becomes a bit less surprising when you consider that I attribute my low score mostly to relatively recent changes in my social skills and preferences, and the criteria I checked were the ones about all-absorbing narrow interests and imposition of routines and interests. As a matter of fact, the changes I mention came about as a result of months of near-obsessive study and accidental practice. (I did not grasp the importance of practice at the time but that's a subject for another comment.)

Maybe I wasn't that far toward the autism end of the spectrum to begin with but it does make me wonder just how much others could improve their social skills given the right circumstances.

0Blueberry
I'd love to hear more about this.

I originally looked at the poll but didn't answer until now.

I fit two of the Gillberg diagnostic criteria and scored 19 on the Wired test. I would've definitely scored much higher a few years back, when I suspected I might have Asperger's. My social skills have developed a lot since then and I'm now more inclined to attribute my social deficiencies to lack of practice than anything else.

For what it's worth, I do seem to follow a pattern of intense pursuit of relatively few interests that change over time.

0Jack
Huh. It is interesting that you scored only 19 (almost 8 lower than me and pretty close to the mean) and still checked two of the Gillberg criteria (they seemed much too strongly put to describe me).

(Note: This post is speculation based on memory and introspection and possibly completely mistaken. Any help in clarifying my thinking and gathering evidence on this would be greatly appreciated.)

I suspect that I'm also affected by this and just haven't consciously noticed. Feels like I'm a lot more comfortable with analytical modes than more intuitive/social ones and probably spending more time inducing them than I should.

I'd like to be more aware of my mental modes and find more effective ways of influencing them. Any suggestions?

ETA: Now that I think abo... (read more)

2NancyLebovitz
You might want to take a look at Open Focus-- the premise as I understand it is that if you cultivate the ability to have slow synchronized brain waves, you spontaneously get better at having the appropriate sort of mental focus for what you're doing. I've worked with it a little, and gotten better body awareness, and probably some psychological gains. It's hard to judge exactly what of the assorted things I'm doing have particular good effects, but Open Focus is scientifically based and working with their attention exercises has some short term good effects for me.

Blow up the paradox-causing FTL? Sounds like that could be weaponized.

I was about to go into detail about the implications of FTL and relativity but realized that my understanding is way too vague for that. Instead, I googled up a "Relativity and FTL travel" FAQ.

I love the idea of strategically manipulating the FTL simultaneity landscape for offensive/defensive purposes. How are you planning to decide what breaks, and how severely, if a paradox is detected?

2Baughn
I think the only possible answer to that is "through play-testing". As I understand it, real-life wormhole physics gives enormous advantages to a defender. However, this is a wargame, so I will have to limit that somewhat. Exactly how, and to what degree - well, that's something I will be confronting in a year or two. (And yes, it could be weaponized. Doing so might not be a good idea, depending on the lovecraft parameter, but you can certainly try.)

Do you mean these meta-analyses?

1taw
Yes. Even if PhilGoetz is correct that harmfulness was an artifact, there's still essentially zero evidence for benefits of eating more vitamins than RDA.

Interesting. I thought that my thinking would be mostly words, like inner monologue or talking to myself. Now that I pay attention it is more like images, emotions, concepts constantly flashing through my head, most gone before I even notice them.

Introspectively it seems that my thinking has changed and I just haven't noticed until now. Or that my conscious mind has finally learned to shut up and pay attention.

aversion to discomfort

This made me think of what pjeby calls the pain brain. In short, our actions can be motivated either by getting closer to what we want (pull) or by moving away from what we try to avoid (push). Generally, push overrides pull, so you may not even notice what you want if you're too busy avoiding what you don't.

It may be useful to explore your goals and motivations with relaxed mental inquiry and critically examine any fears or worries that may come up.

I recently finished the book Mindset by Carol S. Dweck. I'm currently rather wary of my own feelings about the book; I feel like a man with a hammer in a happy death spiral. I'd like to hear others' reactions.

The book seems to explain a lot about people's attitudes and reactions to certain situations, with what seems like unusually strong experimental support to boot. I recommend it to anyone (and I mean anyone - I've actually ordered extra copies for friends and family) but teachers, parents and people with interest in self-improvement will likely benefit... (read more)

0RobinZ
I'm no fan of joke religions - even the serious joke religions - but the Church of the SubGenius promoted the idea of the "Short Duration Personal Savior" as a mind-hack. I like that one. (No opinion on the book - haven't read it.)

How do I know I'm not simulated by the AI to determine my reactions to different escape attempts? How much computing power does it have? Do I have access to its internals?

The situation seems somewhat underspecified to give a definite answer, but given the stakes I'd err on the side of terminating the AI with extreme prejudice. Bonus points if I can figure out a safe way to retain information on its goals so I can make sure the future contains as little utility for it as feasible.

The utility-minimizing part may be an overreaction but it does give me an idea: Maybe we should also cooperate with an unfriendly AI to such an extent that it's better for it to negotiate instead of escaping and taking over the universe.

As I understand Eliezer's position, when babyeater-humans say "right", they actually mean babyeating. They'd need a word like "babysaving" to refer to what's right.

Morality is what we call the output of a particular algorithm instantiated in human brains. If we instantiated a different algorithm, we'd have a word for its output instead.

I think Eliezer sees translating babyeater word for babyeating as "right" as an error similar to translating their word for babyeaters as "human".

4Unknowns
Precisely. So it was luck that we instantiate this algorithm, instead of a different one.