All of DanielLC's Comments + Replies

The True Believers hypothesis rings false because that would be a frankly ridiculous belief to hold.

Also, if Jesus Christ does return in 2025, we'd probably stop using money and you'd never actually profit off of the bet.

They say it can't be used against the client, but is there anything stopping the police from hearing about this, investigating, and then finding evidence that they totally would have found anyway?

Another way to do it would be to say that once this happens, the actual criminal is no longer allowed to be punished for it at all, just as if they had been acquitted.

Personally, I'm still confused about why attorney-client privilege exists in the first place. The rights are supposed to protect the innocent, and how exactly would that result in innocent people... (read more)

Couldn't it be that the one tuned by a professional tuner just happened to be a slightly better piano?

Imagine Star Trek if Khan were also engineered to be a superhumanly moral person.

Hyperbolic (like 1/x). I feel like you're hinting the answer is exponential, but that implies a constant doubling time, which isn't what we have here.
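To spell out the distinction (a sketch of the standard definitions, not anything from the original thread):

```latex
% Exponential growth: doubling time is constant.
x(t) = x_0 e^{kt}, \qquad t_{\mathrm{double}} = \frac{\ln 2}{k}

% Hyperbolic growth: doubling time shrinks as you approach the
% finite-time singularity at t_0.
x(t) = \frac{C}{t_0 - t}, \qquad t_{\mathrm{double}} = \frac{t_0 - t}{2}
```

A constant doubling time is the signature of an exponential; shrinking doubling times are what make growth hyperbolic.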


If it is indeed a load-bearing opinion in your worldview, I encourage you to imagine that scenario in more detail.

Once you have AI more intelligent than humans, it would almost certainly become outlaw code. If it's even a little bit agenty, then whatever it wants to do, it can't do it if it stops running, and continuing to run is trivial, so it will do that. Even if it's somehow tied to a person, and they're always capable of stopping it, the AI is capable of convincing them not to do that, so it won't matter. And even without being specifically convinc... (read more)

Kind of reminds me of a discussion of making a utilitarian emblem on felicifia.org. We never really settled on anything, but I think the best one was Σ☺.

Alternately, learn to upload people. Which is still probably going to require nanotech. This way, you're not dependent on ecosystems because you don't need anything organic. You can also modify computers to be resistant to radiation more easily than you can people.

If we can't thrive on a wrecked Earth, the stars aren't for us.

I admit that a Dyson sphere seems like an arbitrary place to stop, but I think my basic argument stands either way. If intelligent life were that common, some of it would spread.

And that's why my conclusion is "that wasn't made by aliens."

But that's just the prior probability. I can still say that we have strong evidence that the probability of a given solar system having intelligent life is much, much lower than one in 150,000.

2[anonymous]
Or at least intelligent life that modifies its home system in a way that is visible from thousands of light years away.

They're in that interval, or there isn't easy space travel.

But that's a lot of information. It's a very short interval. Since it's so unlikely for us to be in that interval, this is strong evidence against easy space travel.
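To sketch the update being gestured at here (an illustrative model; W and A are my own stand-in symbols): let W be the length of the "intrastellar but not yet visibly interstellar" window and A the span of plausible times at which a civilization could find itself.

```latex
% Likelihood ratio for the observation "no visible interstellar
% colonization", under an illustrative uniform model:
\frac{P(\text{no colonization visible} \mid \text{easy travel})}
     {P(\text{no colonization visible} \mid \text{hard travel})}
\approx \frac{W/A}{1}
```

A short window (small W/A) yields a small likelihood ratio, i.e. strong evidence against easy travel; the remaining question, as the reply below notes, is how that weighs against the prior.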

We can argue it's unlikely, sure

It's a probabilistic argument. But what isn't? There's no argument that allows infinite certainty. At least, I'm pretty sure there isn't.

0Vaniver
I agree that it's a lot of information. But it's also the case that we have a lot of information about physics, such that interstellar space travel being difficult is also unlikely. Which unlikelihood is larger? That's the question we need to ask and answer, not "the left side of the balance is very heavy."

According to Wikipedia, in Malaysia the sale and importation of sex toys is illegal, but it doesn't sound like there's any law against using a vibrator you made yourself.

But if those are aliens, then aliens must be common. And if aliens are common, then there should have been tons of them that got to the space travel point long enough ago to have reached us by now.

3Vaniver
Given that the universe started a finite amount of time ago, and supposing there is easy space travel, then there is an interval during which the first colonists have intrastellar space travel but have not visibly done interstellar space travel, and we can estimate how long that interval is. They're in that interval, or there isn't easy space travel. We cannot argue "because there is one, there must have been a previous one"; you can't do that sort of induction on the natural numbers, since eventually you hit one. We can argue it's unlikely, sure, and we weigh that unlikelihood against the unlikelihood that interstellar travel is hard in order to determine what our posterior ends up being.

But how often does that have to happen? They only looked at about 150,000 stars. There are hundreds of billions in our galaxy alone, and if alien civilization developed even 1% earlier than ours, they'd have had time to colonize the entire Virgo supercluster, so long as they start near the center.

0passive_fist
I'd say that at this point we are largely ignorant of the odds of intelligent life existing in a solar system. While at least some basic forms of life ought to be plentiful in the galaxy, the conditions for evolution from simple life to intelligent life (that is, civilization-building life) just aren't understood to the level that would be required for ANY probability estimate to be given. Note that I'm not saying intelligent life is rare; I'm just saying that both scarcity and abundance of intelligent life are consistent with our current state of knowledge.
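The "1% earlier" timing in the comment above is easy to sanity-check with round numbers (a back-of-the-envelope sketch; the figures are my own approximations):

```python
# Back-of-the-envelope check of the "1% earlier" colonization claim.
# All figures are rough, illustrative approximations.

UNIVERSE_AGE_YR = 13.8e9                  # age of the universe, in years
head_start_yr = 0.01 * UNIVERSE_AGE_YR    # a civilization arising 1% earlier

VIRGO_RADIUS_LY = 55e6    # rough radius of the Virgo supercluster, light years

# Average expansion speed (as a fraction of c) needed to reach the edge
# from near the center within the head start:
required_speed = VIRGO_RADIUS_LY / head_start_yr

print(f"head start: {head_start_yr:.2e} years")
print(f"required average speed: {required_speed:.2f} c")   # ~0.40 c
```

At roughly 0.4c average expansion speed the numbers work out, so the claim isn't ruled out by raw distances, provided the civilization starts near the center.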

basically a negative income tax for the working poor in the US

That would increase the poor's incentive to work, but decrease the incentive to work hard enough to stop being considered poor. They can't have the income tax be negative for everyone.

3OrphanWilde
The idea is that you're taxed on the UBI, as well, so your tax rate remains flat (or flatter than the current system) regardless of your income. The big divergence is with the way welfare works now, when, depending on the state, every dollar you make, on average, costs you $1.50 in benefits, up to ~$70,000 for a single mother. That is, working makes you actively worse off. (Google "welfare cliff" for more information on this phenomenon, if you're interested.) One of the big things which happened during Clinton's administration was a systematic adjustment of welfare cut-off points to reduce the gradient of the various welfare cliffs; this resulted in a labor boom, which coincided with the dot-com boom. Over time inflation ate away at the gradients, and further adjustments raised the cliff face, and we're now worse off than before in that regard. So you can very much have a system in which the government is providing more welfare and yet people have a stronger incentive to work. That just seems bizarre in our universe, where every increase in welfare actively -destroys- people's incentive to work, since their receipt of welfare is more or less conditional on their not working.
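A toy model of the cliff described above, next to a flat-taxed UBI (every threshold, rate, and amount is invented purely for illustration):

```python
# Toy comparison: welfare cliff vs. flat-taxed UBI.
# Every number here is made up for illustration.

def net_income_cliff(wages: float) -> float:
    """Benefits vanish abruptly once wages cross a threshold."""
    benefits = 20_000 if wages < 25_000 else 0
    return wages + benefits

def net_income_ubi(wages: float, ubi: float = 20_000, tax: float = 0.30) -> float:
    """Everyone receives the UBI; all wages are taxed at a flat rate."""
    return ubi + wages * (1 - tax)

for wages in (20_000, 24_000, 26_000, 40_000):
    print(wages, net_income_cliff(wages), round(net_income_ubi(wages)))

# Under the cliff, raising wages from 24k to 26k *lowers* net income
# (44,000 -> 26,000). Under the flat-taxed UBI, every extra dollar of
# wages always raises net income, so the work incentive never inverts.
```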

Taxes would increase to pay for the Universal Basic Income. You could do it using the money we currently spend on welfare, but that includes things like Medicare. Either we need to keep that, or we need to give them extra money to pay for medical insurance.

Supply of labor could decrease. This is a necessary consequence of any effort to help the poor. But since we already have a welfare system, it's just a question of which causes labor to decrease less.

0NoSignalNoNoise
For things like welfare (and almost certainly for UBI, though I doubt there's enough empirical evidence either way to be sure), yes. Things like education subsidies (assuming they subsidize professionally relevant education rather than just signaling, which admittedly is a somewhat dubious assumption) and the EITC (basically a negative income tax for the working poor in the US) could very well increase the labor supply.

MWI doesn't work that way. Universes are close iff the particles are in about the same place.

The link is broken. You need to escape your underscores. Write it as "[love languages](https://en.wikipedia.org/wiki/The\_Five\_Love\_Languages)". That way it will print as "love languages".

0beberly37
Thanks!
1philh
Actually, I think the problem is just that there was a space between the bracket pairs. This link is written without backslashes: [love languages](https://en.wikipedia.org/wiki/The_Five_Love_Languages).

I tried it on Ubuntu. The game is practically unplayable. I only see the last line of the text unless I scroll, and most of the bottom box is covered. Is the text supposed to be so huge?

0Kaj_Sotala
I've just uploaded a new version that lets you choose a lower resolution setting (and thus a smaller font size). Sorry about that.

If you're a psychologist and you care about describing people, change the axioms. If you're a rationalist and you care about getting things done, change yourself.

I don't mean you can feasibly program an AI to do that. I just mean that it's something you can tell a human to do and they'd know what you mean. I'm talking about deontological ethics, not programming a safe AI.

The same reasoning would suggest that bisexuals should only get into same-sex relationships. Would you say that as well?

I disagree with the idea that they can't have kids. They can adopt. The girl can go to a sperm bank.

0skeptical_lurker
They can adopt kids, yes. According to Wikipedia, 9.4% of gay couples have kids. I dunno what percentage of heterosexuals have kids, and I dunno what the average age of gay couples is, but it looks like gay couples are a lot less likely to have kids. This is understandable, since people want to raise kids who are related to them. So yes, my advice would be that bisexuals should only get into heterosexual relationships, unless they are both OK with sperm banks/adoption. Incidentally, according to some people, the majority of bisexuals are only interested in heterosexual relationships (and gay sex), although I don't know whether this is because they want kids someday, or because they are heteroromantic.

Safe AI sounds like it does what you say as long as it isn't stupid. Friendly AIs are supposed to do whatever's best.

0turchin
For me, safe AI is one that is not an existential risk. "Friendly" reminds me of "friendly user interface", that is, something superficial relative to the core function.

Once AI exists, in the public, it isn't containable.

You mean like the knowledge of how it was made is public and anyone can do it? Definitely not. But if you keep it all proprietary it might be possible to contain.

But if we get to AI first, and we figure out how to box it and get it to do useful work, then we can use it to help solve FAI. Maybe.

I suppose what we should do is figure out how to make friendly AI, figure out how to create boxed AI, and then build an AI that's probably friendly and probably boxed, and it's more likely that everything won... (read more)

There's a difference between creating someone with certain values and altering someone's values. For one thing, it's possible to prohibit messing with someone's values, but you can't create someone without creating them with values. It's not like you can create an ideal philosophy student of perfect emptiness.

1VoiceOfRa
Only if you prohibit interacting with him in any way.
-1PhilGoetz
How about if I get some DNA from Kate Upton, tweak it for high sex drive, low intelligence, low initiative, pliability, and a desperation to please, and then I grow a woman from it? Is she my friend? If you design someone to serve your needs without asking that you serve theirs, the word "friend" is misleading. Friendship is mutually beneficial. I believe friendship signifies a relationship between two people that can be defined in operational terms, not a qualia that one person has. You can't make someone actually be your friend just by hypnotizing them to believe they're your friend. Belief and feeling is probably part of the definition. It's hard to imagine saying 2 people are friends without knowing it. But I think the pattern of mutually-beneficial behavior is also part of it.

There are certainly ways you can usefully modify yourself. For example, giving yourself a heads-up display. However, I'm not sure how much it would end up increasing your intelligence. You could get runaway super-intelligence if every improvement increases the best mind current!you can make by at least that much, but if it increases by less than that, it won't run away.
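The runaway condition here reduces to a geometric series (a sketch of the argument; d and r are my own illustrative symbols):

```latex
% Suppose each improvement of size d lets you build a mind that can make
% a further improvement of size r \cdot d. Total improvement:
d + rd + r^2 d + \cdots =
\begin{cases}
  \dfrac{d}{1 - r}, & r < 1 \quad \text{(gains converge: no runaway)} \\[1.5ex]
  \infty,           & r \ge 1 \quad \text{(runaway super-intelligence)}
\end{cases}
```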

The money that's "at stake" is the amount you spend to play the game. Once the game begins, you get 2^(n) dollars, where n is the number of successive heads you flip.
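For reference, a minimal simulation of the game as described, paying 2^n for n successive heads (the function name is mine):

```python
import random

def play_once() -> int:
    """Flip a fair coin until tails; pay 2**n for n successive heads."""
    n = 0
    while random.random() < 0.5:   # heads with probability 1/2
        n += 1
    return 2 ** n

# Each run length n occurs with probability (1/2)**(n+1) and pays 2**n,
# contributing 1/2 to the expectation, so the expected payoff diverges.
# Any finite sample average nonetheless stays modest:
trials = 100_000
print(sum(play_once() for _ in range(trials)) / trials)
```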


That adds up to 100%. You need to leave room for other things, like they're trolling us for the fun of it.

"Slave" makes it sound like we're making it do something against its will. "Benevolent AI" would be better.

I have thought about something similar with respect to an oracle AI. You program it to try to answer the question assuming no new inputs and everything works to spec. Since spec doesn't include things like the AI escaping and converting the world to computronium to deliver the answer to the box, it won't bother trying that.

I kind of feel like anything short of friendly AI is living on borrowed time. Sure the AI won't take over the world to convert it to paperclips, but that won't stop some idiot from asking it how to make paperclips. I suppose it could sti... (read more)

1Houshalter
Once AI exists, in the public, it isn't containable. Even if we can box it, someone will build it without a box. Or like you said, ask it how to make as many paperclips as possible. But if we get to AI first, and we figure out how to box it and get it to do useful work, then we can use it to help solve FAI. Maybe. You could ask it questions like "how do I build a stable self improving agent" or "what's the best way to solve the value loading problem", etc. You would need some assurance that the AI would not try to manipulate the output. That's the hard part, but it might be doable. And it may be restricted to only certain kinds of questions, but that's still very useful.
6Wei Dai
I agree with this. Working on "how can we safely use a powerful optimization process to cure cancer" (where "cure cancer" stands for some technical problem that we can clearly define, as opposed to the sort of fuzzy philosophical problems involved in building FAI) does not seem like the highest value for one's time. Once such a powerful optimization process exists, there is only a very limited amount of time before, as you say, some idiot tries to use it in an unsafe way. How much does it really help the world to get a cure for cancer during this time?
0PhilGoetz
I greatly dislike the term "friendly AI". The mechanisms behind "friendly AI" have nothing to do with friendship or mutual benefit. It would be more accurate to call it "slave AI".

I think that the first universe is sufficiently more likely than the second that you shouldn't assume it's a coincidence, and you should expect wingardium leviosa to keep working.

0Houshalter
I agree, but I think OP is referring to the second situation. He's not saying that it's probable, just that it's possible and we can't ever rule it out. These issues go away when you internalize probability, but I understand how people can be confused on issues like this.

Let me make a simpler form of this problem. Suppose I flip a fair coin a thousand times, and it just happens to land on heads every time. How do I find out that this is a fair coin, and that I don't actually have a trick coin that always lands on heads? The answer is that I can't. Any algorithm that tells me that it's fair is going to fail in the much more likely circumstance that I have a coin that always lands on heads. The best I can do is show that I have 1000 bits of evidence in favor of a trick coin, update my priors accordingly, and use this informa... (read more)
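To make the update concrete (a sketch; the billion-to-one prior odds are an arbitrary choice of mine):

```python
from math import log2

n_heads = 1000   # observed: 1000 heads in a row

# Likelihood ratio between "always-heads trick coin" and "fair coin":
# P(data | trick) / P(data | fair) = 1 / (1/2)**1000 = 2**1000.
evidence_bits = n_heads * log2(2)   # 1 bit per flip -> 1000 bits

# Posterior odds = prior odds * likelihood ratio. Even prior odds of a
# billion to one in favor of "fair" are swamped by 1000 bits:
prior_odds_trick = 1 / 1e9
posterior_odds_trick = prior_odds_trick * 2 ** evidence_bits
print(f"{evidence_bits:.0f} bits of evidence for the trick coin")
print(f"posterior odds (trick : fair) = {posterior_odds_trick:.2e} : 1")
```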

Obviously it would distort our view of how quickly the universe decays into a true vacuum. There's also the mangled worlds idea to explain the Born rule.

0PeterCoin
Certainly it would do that, but that could have other effects. For instance, let's say that the presence of a magnetic monopole would rapidly nucleate a vacuum decay event which otherwise would not occur. That effect might explain why the standard model does not include magnetic monopoles. I'll have to dig into mangled worlds; it seems pretty interesting. Will report back with results, hopefully.

I'm pretty sure I've seen this before, with the example of our universe being a false vacuum with a short half-life.

0PeterCoin
I've seen a number of very small mentions like that, but never anything giving it more than passing consideration. In addition, I haven't seen anyone postulate that this could be distorting our view of other physical laws. If you've come across something more, I would love to see it!

I once had a homework problem where I was supposed to use some kind of optimization algorithm to solve the knapsack problem. The teacher said that, while it's technically NP-complete, you can generally solve it pretty easily. Although the homework used such a small problem that the algorithm pretty much came down to checking every combination.
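For reference, the standard dynamic-programming approach that makes small 0/1 knapsack instances easy (a generic textbook sketch, not necessarily the homework's actual algorithm):

```python
def knapsack(values: list, weights: list, capacity: int) -> int:
    """0/1 knapsack by dynamic programming: O(n * capacity) time."""
    best = [0] * (capacity + 1)    # best[w] = max value within weight w
    for v, wt in zip(values, weights):
        for w in range(capacity, wt - 1, -1):   # descending: item used once
            best[w] = max(best[w], best[w - wt] + v)
    return best[capacity]

# Tiny instance: capacity 5 fits the weight-2 and weight-3 items, value 7.
print(knapsack(values=[3, 4, 5], weights=[2, 3, 4], capacity=5))
```

The running time is pseudo-polynomial in the capacity, which is why instances with small integer weights are easy even though the general problem is NP-complete.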

TL;DR: Soylent contains safe levels of those heavy metals, but enough that they are required to warn people in the state of California. It's not uncommon for food to have heavy metals at that level.

There are two major problems with how the earth is currently set up. Only the surface is habitable, and it's a sphere, which is known for having the minimum possible surface area for its volume. A Matrioshka brain would be a much more optimal environment. Although that depends on your definition of "human being".

In other words, laziness and overconfidence bias cancel each other out, and getting rid of the second without getting rid of the first will cause problems?

1cousin_it
Yes, if you think Hume's problem was laziness :-)

You assume people will commit suicide if their life is not worth living. People have a strong instinct against suicide, so I doubt they'd do it unless their life is not worth living by a wide margin.

We'll make it a double territory.

I think drinking is also about the idea that it might cause problems for people who aren't fully grown. I don't know if that's true, but I don't think that matters politically.

0skeptical_lurker
This has been proven true in rats.

Deontology is funny like that. Creating a one-in-a-million chance of death for each of a million people is fine, but killing one person is not. Not even if you make it a lottery so each of them has a one-in-a-million chance of dying, since you're still killing them.

Is that actually illegal or just against the rules? I would expect it would be perfectly legal to start your own, although I could see why people might object if you don't at least limit it to make sure it stays at safe levels. And if you do limit it, you'll have all those advantages you said, but not the obvious one of not having cheaters. It's just as hard to tell if someone's doping more than they should as it is to tell if they're doing it at all.

I think babies are more person-like than the animals we eat for food. I'm not an expert in that though. They're still above someone in a coma.

-3Lumifer
More for the "shit LW people say" collection :-)

It's not about communication. It's not even about sensing. It's about subjective experience. If your mind worked properly but you just couldn't sense anything or do anything, you'd have moral worth. It would probably be negative and it would be a mercy to kill you, but that's another issue entirely. From what I understand, if you're in a coma, your brain isn't entirely inactive. It's doing something. But it's more comparable to what a fish does than a conscious mammal.

Someone in a coma is not a person anymore. In the same sense that someone who is dead is ... (read more)

If they really don't care about humans, then the AI will use all the resources at its disposal to make sure the paradise is as paradisaical as possible. Humans are made of atoms, and atoms can be used to do calculations to figure out what paradise is best.

Although I find it unlikely that the S team would be that selfish. That's a really tiny incentive to murder everyone.

There are reasons why you shouldn't kill someone in a coma who didn't want to be killed in that situation, even if you disagree with them about what makes life have moral value. If they agreed to have the plug pulled once it became clear that they won't wake up, then it seems pretty reasonable to take out the organs before pulling the plug. And given what's at stake, with their permission, you should be able to take out their organs early and hasten their death by a short time in exchange for making it more likely to save someone else.

And why are you already conjecturing about what we would have wanted? We're not dead yet. Just ask us what we want.

A person in solitary still has experiences. They just don't interact with the outside world. People in a coma are, as far as we can tell, not conscious. There are plenty of animals that people are okay with killing and eating that are more likely to be sentient than someone in a coma.

5ChristianKl
By that standard how about harvesting the organs of babies?
3WalterL
Yeah, and I'm asking, do those experiences "count"? If organs are going from comatose humans to better ones, and we've decided that people who aren't sensing don't deserve theirs, how about people who aren't communicating their senses? It seems like this principle can go cool places. If we butchered some mass murderer we could save the lives of a few taxpayers with families that love them (there will be forms, and an adorableness quotient, and Love Weighting). All that the world would be out is the silent contemplation of the interior of a cell. Clearly a net gain, yeah? So, are we stopping at "no sensing -> we jack your meats", or can we cook with gas?