Comment author: twanvl 13 October 2015 08:28:57PM 7 points [-]

Why would the price of necessities rise?

There are three reasons why the price might go up:

1. demand increases
2. supply decreases
3. inflation

Right now, everyone is already consuming these necessities, so if UBI is introduced, demand will not go up. So 1 would not be true.

Supply could go down if enough people stop working. But if this reduces supply of the necessities, there is a strong incentive for people on just UBI to start working again. There is also increasing automation. So I find 2 unlikely.

That leaves 3, inflation. I am not an economist, but as far as I understand this shouldn't be a significant factor.

Comment author: DanielLC 14 October 2015 12:38:51AM 0 points [-]

Taxes would increase to pay for the Universal Basic Income. You could do it using the money we currently spend on welfare, but that includes things like Medicare. Either we need to keep that, or we need to give people extra money to pay for medical insurance.

Supply of labor could decrease. This is a necessary consequence of any effort to help the poor. But since we already have a welfare system, it's just a question of which causes labor to decrease less.

Comment author: Craigus 05 October 2015 11:15:37PM *  1 point [-]

Potential crank warning; non-physicist proposing experiments. Sorry if I'm way off-base here, please let me know where I've gone wrong.

I was contemplating MWI and dark matter, and wondered if dark matter was just the gravitational influence of matter in other universes, where the other universes' matter is distributed differently to ours. Google tells me that others have proposed theories like this, but I can't find if anyone has ever tried to test it.

Has anyone ever tried to test this directly? We have gravimeters sensitive enough that one "detected the gradual increase in surface gravity as workmen cleared snow from its laboratory roof".

Imagine an experiment were run using a source of quantum-random binary data, with the protocol to move a large mass closer to or further away from the gravimeter based on the quantum data. My expectation based on this theory is that the gravimeter would measure:

  • Classically move the mass away from the gravimeter: a baseline of gravitational influence (earth, buildings, etc.).
  • Classically move the mass close to the gravimeter: the full gravity of the mass (baseline + mass).
  • Quantumly move the mass close to the gravimeter: some of the gravity of the mass.
  • Quantumly move the mass away from the gravimeter: some of the gravity of the mass.

The experimenters would want to repeat the quantum mass movements many times, so that as many universes as possible measure both the 'close to' and 'further away' positions of the mass at least once. (If the experiment did only 5 measurements, 2 out of 32 universes would have their experiment be 'mass is always close' or 'mass is always further away', and therefore wouldn't get the full benefit of the experiment.)
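The branch-counting above can be sanity-checked with a few lines. This is a toy model that treats each quantum-random move as an equal-weight binary branching; the function name is mine, not from the comment:

```python
# Toy branch-counting model for the proposed experiment: each of n
# quantum-random moves splits into two equal-weight branches, giving
# 2**n branches in total. Exactly 2 of them ('mass is always close'
# and 'mass is always away') never see both mass positions.

def one_sided_fraction(n: int) -> float:
    """Fraction of the 2**n equal-weight branches that observe only
    one of the two mass positions."""
    return 2 / 2 ** n

print(one_sided_fraction(5))    # 2/32 = 0.0625
print(one_sided_fraction(20))   # tiny: almost every branch sees both positions
```

So even a modest number of repetitions makes the 'one-sided' branches a negligible fraction.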

Interestingly if this theory were true, experiments could be run where the gravimeter and mass are used to communicate between universes.

Comment author: DanielLC 06 October 2015 03:49:22AM 3 points [-]

MWI doesn't work that way. Universes are close iff the particles are in about the same place.

Comment author: beberly37 28 September 2015 05:24:26PM *  4 points [-]

This is an open question about a brain-hack.

I don't believe the concept of love languages is big on LW, but searching the forum leads to a few mentions of them. It's not exactly a data-driven concept, but anecdotally, spending time and acts of service are effective ways to make me feel loved, while gifts and compliments are not (they actually usually make me feel uncomfortable).

The primary concept of the love languages book is to change the way you show love from what you prefer to what your partner prefers (i.e. if your main language is touch and you are always snuggling with your spouse, but their main language is acts of service, they will feel unloved while you snuggle with them instead of doing the dishes; so you should make an effort to do the dishes instead of snuggling on the couch).

My question is: has anyone experienced or developed (or will develop, prompted by this comment) a method for changing one's love language priorities, so that I can feel more loved given my current circumstances?

The small back story: as a result of adding two kids, a real job, and an alone-time-hungry stay-at-home-mom wife, time is very limited, which means quality time is at a premium, so I'm feeling unloved. It would be preferable to make more time exist, but that's unlikely, so I would like to hack my brain to make me feel loved in other ways. Any ideas?

edited to add italics for clarity

edited 10/7/2015 to add cautionary update: It has been commented that there may be side effects to brain hacking. Two that come up almost immediately are worth mentioning, because they can work in direct opposition to the goal of feeling more loved:

Nightly listing of all instances of signals of love results in real-time noticing of them (which is a plus, the "I can write about this later!" feeling), but this is coupled with real-time noticing of missed opportunities to show love (Why didn't she make me tea?)

There is a tendency (for me) to compare/notice list lengths from day to day, i.e. "there are only 5 today and 15 yesterday" [trombone sound]

Comment author: DanielLC 29 September 2015 05:01:02AM *  1 point [-]

The link is broken. You need to escape your underscores. Write it as "[love languages](https://en.wikipedia.org/wiki/The\_Five\_Love\_Languages)". That way it will print as "love languages".

Comment author: DanielLC 23 September 2015 02:53:33AM 1 point [-]

I tried it on Ubuntu. The game is practically unplayable. I only see the last line of the text unless I scroll, and most of the bottom box is covered. Is the text supposed to be so huge?

Comment author: OrphanWilde 08 September 2015 05:05:44PM 4 points [-]

It's obvious that humans don't actually maximise a utility function; but according to the axioms, we should do so.

Given a choice between "change people" and "change axioms", I'd be inclined to change axioms.

Comment author: DanielLC 18 September 2015 08:16:34PM -1 points [-]

If you're a psychologist and you care about describing people, change the axioms. If you're a rationalist and you care about getting things done, change yourself.

Comment author: VoiceOfRa 29 August 2015 07:42:54PM 3 points [-]

For one thing, it's possible to prohibit messing with someone's values

Only if you prohibit interacting with him in any way.

Comment author: DanielLC 30 August 2015 01:04:40AM -1 points [-]

I don't mean you can feasibly program an AI to do that. I just mean that it's something you can tell a human to do and they'd know what you mean. I'm talking about deontological ethics, not programming a safe AI.

In response to comment by [deleted] on Open Thread - Aug 24 - Aug 30
Comment author: skeptical_lurker 24 August 2015 06:40:50PM *  1 point [-]

Even though contraception is widespread in this day and age, if a large number of siblings have sex with each other, some of them will inevitably end up having kids.

A second problem is that the energy and emotions and time they devote to their incestuous relationship isn't going to a relationship where they might have kids.

5 years ago, I would have thought logically, and said that if they don't want kids and have access to effective contraception then it isn't a problem. But now I would think probabilistically, and say that even if they are 99% sure they don't want kids, they are about 50% likely (assuming standard levels of overconfidence) to change their minds around 30, and now they are really heavily invested in a relationship which cannot lead to healthy kids, and the sister's biological clock is running out of time.

So, it's certainly a bad idea, although that doesn't automatically mean it should be made illegal, depending upon whether you believe citizens should have the right to make bad decisions.

Comment author: DanielLC 27 August 2015 06:15:44AM 2 points [-]

The same reasoning would suggest that bisexuals should only get into same-sex relationships. Would you say that as well?

I disagree with the idea that they can't have kids. They can adopt. The girl can go to a sperm bank.

Comment author: turchin 25 August 2015 07:48:03PM 3 points [-]

I prefer the term "Safe AI" as it is more self-explanatory to an outsider.

Comment author: DanielLC 26 August 2015 10:04:06PM 0 points [-]

Safe AI sounds like it does what you say as long as it isn't stupid. Friendly AIs are supposed to do whatever's best.

Comment author: Houshalter 26 August 2015 07:33:28AM 1 point [-]

Once AI exists, in the public, it isn't containable. Even if we can box it, someone will build it without a box. Or like you said, ask it how to make as many paperclips as possible.

But if we get to AI first, and we figure out how to box it and get it to do useful work, then we can use it to help solve FAI. Maybe. You could ask it questions like "how do I build a stable self improving agent" or "what's the best way to solve the value loading problem", etc.

You would need some assurance that the AI would not try to manipulate the output. That's the hard part, but it might be doable. And it may be restricted to only certain kinds of questions, but that's still very useful.

Comment author: DanielLC 26 August 2015 10:02:58PM 0 points [-]

Once AI exists, in the public, it isn't containable.

You mean like the knowledge of how it was made is public and anyone can do it? Definitely not. But if you keep it all proprietary it might be possible to contain it.

But if we get to AI first, and we figure out how to box it and get it to do useful work, then we can use it to help solve FAI. Maybe.

I suppose what we should do is figure out how to make friendly AI, figure out how to create boxed AI, and then build an AI that's probably friendly and probably boxed, and it's more likely that everything won't go horribly wrong.

You would need some assurance that the AI would not try to manipulate the output.

Manipulate it to do what? The idea behind mine is that the AI only cares about answering the questions you pose it given that it has no inputs and everything operates to spec. I suppose it might try to do things to guarantee that it operates to spec, but it's supposed to be assuming that.

Comment author: PhilGoetz 26 August 2015 01:09:31AM 1 point [-]

So if I lock you up in my house, and you try to run away, so I give you a lobotomy so that now you don't run away, we've thereby become friends?

Comment author: DanielLC 26 August 2015 09:58:25PM 0 points [-]

There's a difference between creating someone with certain values and altering someone's values. For one thing, it's possible to prohibit messing with someone's values, but you can't create someone without creating them with values. It's not like you can create an ideal philosophy student of perfect emptiness.
