Comments

What is the relationship between housing characteristics and the elements of flourishing?

http://lesswrong.com/lw/7am/rational_home_buying/ Does this help?

Other things that come to mind: being able to walk to places, and a lack of little things that take more mental energy than they should (alternate-side street parking is one of those for me).

Your housing should make it easy and enjoyable to do things you value. Live near a gym or a beautiful park if you want to exercise more. Make sure the kitchen is decent if you want to eat out less. I know that socializing is good for me, but I'm bad about making plans and starting conversations. So I live with introverted, nerdy roommates (the sort of people I get along with best), and I'm trying to move to a nearby neighborhood where people hang out and talk outdoors a lot.

Your housing should not make you stressed about money. For most people, it's their largest budget category, and not a very flexible one. The common wisdom is that housing plus debt payments should be less than 1/3 of your income (with possible exceptions if you rent in an expensive city). If you can go lower than this without sacrificing too much, I'd say do it - having extra cash is better for human thriving than fancy housing. (Possible ways to turn cash into thriving: travel, take unpaid vacation or time between jobs to work on a side project, visit faraway friends, or walk away from a job or living situation that turns terrible without money worries stopping you.)
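If it helps, here's a minimal sketch of that 1/3 rule of thumb; the income and debt figures are made up for illustration:

```python
# Rough affordability check for the "housing + debt < 1/3 of income" rule of thumb.
# All figures are monthly and purely hypothetical.

def max_housing_budget(gross_monthly_income, monthly_debt_payments, cap=1/3):
    """Return the most you can spend on housing while keeping
    housing plus debt payments under `cap` of gross income."""
    return gross_monthly_income * cap - monthly_debt_payments

if __name__ == "__main__":
    income = 5000   # hypothetical gross monthly income
    debt = 400      # hypothetical loan / car payments
    budget = max_housing_budget(income, debt)
    print(f"Spend at most ${budget:.0f}/month on housing")  # ~$1267
```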

This probably isn't anything you didn't already know, but since no one else responded - you might try Hacker News, to run it by startup-interested people.

I can't upvote because I'm new, but I wanted to say that in addition to being a good insight, I really liked your use of examples. The diagrams and the agents' human-like traits (like being in only one place at a time) made the article more accessible.

Also, do not forget how the body influences the brain. Just look back on what happened to you during puberty, when sexual desire overwhelmed you and made it impossible to remain calm. That happened thanks to chemicals, but it's still very interesting to see how a single chemical can have a huge influence on your consciousness.

This sometimes falls by the wayside in discussions of whole brain emulation, but I think it's really interesting. I talked to a transgender person once, who said that she felt like a different person while taking hormones vs. not taking them, to the point that her memories of times she was off her medication felt like someone else's memories, or a past life. Brain emulation could probably simulate this somehow, and it would probably be more configurable than the chemistry inside a biological brain.

Which opens up some interesting possibilities. People with emulated brains would have better control over this sort of thing than today's bio humans do. They could adjust the chemical inputs to be the best (in their opinion) version of themselves - energetic, focused, patient, and never craving caffeine. And then maybe they'd want to experiment with more unusual chemical settings, and end up with a very different personality than before. Are they still the same person, after going from gloomy to peppy, or from irritable to serene? Does having this much control make them less human-like?

This is further complicated by the possibility of improvements in hormonal and psychiatric medications for bio humans. If everyone could and occasionally did change their daily supplements in ways that made them feel like a different person, would we be less "human" while still biological?

I've been thinking about alternative reasons why people living in rich neighborhoods of poor counties are happier.

Maybe the happiness-promoting physical qualities of neighborhoods (green space, lack of noise, feeling safe) correlate with income when they vary between counties, but not when they vary within counties.

I'd expect the poorest part of Pittsburgh to be about on par with the poorest part of northern New Jersey, and the same for the richest parts. (Perhaps less fancy, but I suspect granite countertops don't affect happiness that much.) The New Jersey county is more expensive because it's near high-paying NYC jobs, not because it's that much nicer.

People move to neighborhoods based on niceness, but to counties based mostly on job proximity. The market reflects this by putting a premium on job availability but not on other county-wide traits, like weather. (If this weren't true, I'd expect southern US real estate to be more expensive relative to average income than northern real estate.)

For reference:

Be careful not to practice your righteousness in front of others to be seen by them. If you do, you will have no reward from your Father in heaven. So when you give to the needy, do not announce it with trumpets, as the hypocrites do in the synagogues and on the streets, to be honored by others. Truly I tell you, they have received their reward in full. But when you give to the needy, do not let your left hand know what your right hand is doing, so that your giving may be in secret. Then your Father, who sees what is done in secret, will reward you.

The explanation I heard at church was that the "hypocrites in the synagogues" would act charitable just to get the social status associated with it, but a really charitable person would want to be charitable even if they had to hide it.

I'm not completely clear on who was supposed to benefit from hiding charity. The giver, because they'd be sure they were doing good for the right reason? Or the community in general, because tolerating people who give for signalling purposes would have caused some kind of harm?

I think it's most likely that this is either virtue ethics (so the giver can be sure they're a good person), or an argument from aesthetics - getting social status makes the charity less aesthetically pleasing.

Would an AI that simulates a physical human brain be less prone to FOOM than a human-level AI that doesn't bother simulating neurons?

It sounds like it might be harder for such an AI to foom, since it would have to understand the physical brain well enough before it could improve on its simulated version. If such an AI exists at all, that knowledge is probably available somewhere, so a foom could still happen if you simulated someone smart enough to learn it (or simulated one of the people who helped build the AI). The AI should at least be boxable if it doesn't know much about neurology or programming, though.

Maybe the catch is that a boxed human simulation that can't self-modify isn't very useful. It would be good as assistive technology or as a form of immortality, but you probably can't learn much about any other kind of AI by studying a simulated human. (The things you could learn from it are mostly ones you could learn about as easily by studying a physical human.)

I tried to brainstorm what they might be thinking.

  • MIRI is making a mistake that means its work is useless
  • MIRI won't decrease AI risk unless some other intervention is done first (there is a prerequisite)
  • We're doomed, resistance is futile
  • Other people will fund it if they wait (seems unlikely, if the amount required is trivial to them)
  • They have political/strategic reasons not to be associated with MIRI (if they contribute anonymously, there's still the risk that other donors will disappear and they'll be stuck supporting it indefinitely)
  • They'd rather work on the problem with their own organization, because of reasons

That's a good way of describing how the difference in my own thinking felt - when I was Christian I had enough of a framework to try to do things, but they weren't really working. (It's not a very good framework for working toward utilitarian values.) Then I bumbled around for a couple of years without much direction. LW gave me a framework again, and it was one that worked a lot better for my goals.

I'm not sure I can say the same thing about other people, though, so we might not be talking about the same thing. (Though I tend not to pay as much attention to the intelligence or "level" of others as most people seem to, so it might just be that.)

The one improvement that I'm fairly certain I can attribute to lesswrong/HPMOR/etc is getting better at morality. First, being introduced to and convinced of utilitarianism helped me get a grip on how to reason about ethics. Realizing that morality and "what I want the world to be like, when I'm at my best" are really similar, possibly the same thing, was also helpful. (And from there, HPMOR's Slytherins and the parts of Objectivism that EAs tend to like were the last couple of ideas I needed to learn how to have actual self-esteem.)

But as to the kinds of improvements you're interested in: I'm better at thinking strategically, often just from using some estimation in decision making. (If I built this product, how many people would I have to sell it to, at what price, to make it worth my time? That often results in not building the thing.) But the time since I discovered lesswrong included my last two years of college and listening to startup podcasts to cope with a boring internship, so it's hard to attribute credit.
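For what it's worth, here's a sketch of the kind of back-of-envelope estimate I mean; every number in it is invented for illustration:

```python
# Back-of-envelope check: is building this product worth my time?
# All inputs are hypothetical placeholders.

def units_needed_to_break_even(hours_to_build, value_of_my_hour, price_per_unit):
    """How many sales it takes before revenue covers the time invested."""
    return (hours_to_build * value_of_my_hour) / price_per_unit

if __name__ == "__main__":
    units = units_needed_to_break_even(
        hours_to_build=200,    # guess at development time
        value_of_my_hour=50,   # rough opportunity cost of an hour
        price_per_unit=20,     # what I think people would pay
    )
    print(f"Need about {units:.0f} sales just to break even")  # 500
```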

My memory isn't better, but I haven't gone out of my way to improve it. I'm pretty sure that programming and reading about programming are much better ways of improving at programming than reading about rationality is. The sanity waterline is already pretty high in programming, so practicing and following best practices is more efficient than trying to work them out yourself from first principles.

It didn't surprise me at all to see that someone had made a post asking this question. The sequences are a bit over-hyped, in that they suggest that rationality might make the reader a super-human, and then that usually doesn't happen. I think I still got a lot of useful brain-tools from them, though. It's like a videogame that was advertised as the game to end all games, and then it turns out to just be a very good game with a decent chance of becoming a classic. (For the record, my expectations didn't go quite that high, as far as I can remember, but it's not surprising that some people's do. It's possible mine did and I just take disappointment really well.)
