All of amcknight's Comments + Replies

But the lower bound of this is still well below one. We can't use our existence in the light cone to infer there's at least about one per light cone. There can be arbitrarily many empty light cones.

They use the number of stars in the observable universe instead of the number of stars in the whole universe. This ruins their calculation. I wrote a little more here.

1DanArmak
We can take it as a calculation for 'number of technological civilizations in our past light cone', whose messages we could receive.
0The_Jaded_One
I think it is safe to say that some process produces the kind of high-rationality person who makes the effort to sign up, and that process (genetics/culture/scifi etc) is relatively slowly changing over the timescale of ~1 decade. In my opinion, there is almost certainly a much larger population of people who are one or two "steps" or "increments" away from signing up. For example, people who are pretty rich, philosophically materialistic, well educated and into the sci-fi scene, but they have only heard a passing mention of cryo, and it "sounds like a scam" to them, because they still think that cryo means literally freezing your head and thawing it out like a freezer bag of strawberries and expecting it to work again.

Cryo needs three things to open the floodgates to this crowd, in my opinion:
(1) An actual demonstration of extracting real memories and personality from a cryopreserved dog/monkey, so that we conclusively know that the full cryonics process preserves information.
(2) Respectable scientific papers in high-status journals describing and analyzing said demonstration. These are the best weapon against naysayers, because if someone is saying that cryo is unscientific, or is a cult or a religion etc etc, they will look pretty silly when you start piling up papers from Nature in front of them that say it ain't so.
(3) Publicity in the right channels to get the message across. In a way, this is the easy bit once you have (1) and (2), because the kind of channels that you want to be in to reach your target demographic (e.g. Ars Technica, Scientific American, Slashdot, Reddit, etc etc) will eat this up like candy and report the shit out of it without you even asking.

Charity Science, which fundraises for GiveWell's top charities, needs $35k to keep going this year. They've been appealing to non-EAs from the Skeptics community and lots of other folks and kind of work as a pretty front-end for GiveWell. More here. (Full disclosure, I'm on their Board of Directors.)

A more precise way to avoid the oxymoron is "logically impossible epistemic possibility". I think 'epistemic possibility' is used in philosophy in approximately the way you're using the term.

Links are dead. Is there anywhere I can find your story now?

1Error
Links fixed, and I rebuilt the pdf/html pages to include changes I've made since. I don't know if I can really call it a draft anymore, but I'm still taking feedback. I never did get it posted anywhere. I plan to fix that this month as part of NaNoWriMo (along with a couple others), because right now I figure polishing and posting stuff I already have is a better idea than writing more.
amcknight390

Done! Ahhh, another year another survey. I feel like I did one just a few months ago. I wish I knew my previous answers about gods, aliens, cryonics, and simulators.

I don't have an answer but here's a guess: For any given pre-civilizational state, I imagine there are many filters. If we model these filters as having a kill rate then my (unreliable stats) intuition tells me that a prior on the kill rate distribution should be log-normal. I think this suggests that most of the killing happens on the left-most outlier but someone better at stats should check my assumptions.
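A rough Monte Carlo sketch of that intuition (everything here is an assumption for illustration: the number of filters, the log-normal parameters, and the choice of measuring a filter's strength as -log of its survival probability):

```python
import numpy as np

# Sketch only: draw each filter's "strength" (-log of its survival probability)
# from a log-normal prior, and ask how much of the total filtering is due to
# the single strongest filter. If that share is large, most of the killing is
# done by one outlier filter, as the intuition above suggests.
rng = np.random.default_rng(0)
n_worlds, n_filters = 10_000, 20          # arbitrary choices
strengths = rng.lognormal(mean=0.0, sigma=2.0, size=(n_worlds, n_filters))
share_of_worst = strengths.max(axis=1) / strengths.sum(axis=1)
print(f"median share of filtering from the single worst filter: "
      f"{np.median(share_of_worst):.2f}")
```

With a wide sigma the worst filter dominates; with a narrow sigma the filtering is spread roughly evenly, so the conclusion really does hinge on the heavy-tailed prior.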

It sounds like CSER could use a loan. Would it be possible for me to donate to CSER and to get my money back if they get $500k+ in grants?

From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there’s a significant chance that at least one key figure in the eventual development of AI will have had amazing math tests in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build

... (read more)
amcknight120

Finally did it. I'd like exactly 7 karma please.

4[anonymous]
Done!

For the goal of eventually creating FAI, it seems work can be roughly divided into making the first AGI (1) have humane values and (2) keep those values. Current attention seems to be focused on the 2nd category of problems. The work I've seen in the first category: CEV (9 years old!), Paul Christiano's man-in-a-box indirect normativity, Luke's decision neuroscience, Daniel Dewey's value learning... I really like these approaches but they are only very early starting points compared to what will eventually be required.

Do you have any plans to tackle the hu... (read more)

lukeprog140

Do you have any plans to tackle the humane values problem?

Yes. The next open problem description in Eliezer's writing queue is in this category.

0juliawise
No, but it does seem to be in a similar vein.

This may be just about vegetarians around me, but often people who are into vegetarianism are also into other forms of food limitations.

I think I've noticed this a bit since switching to a vegan(ish) diet 4 months ago. My guess is that once a person starts making diet restrictions, it becomes much easier to make diet restrictions, and once a person starts learning where their food comes from, it becomes easier to find reasons to make diet restrictions (even dumb reasons).

Value drift fits your constraints. Our ability to drift accelerates as enhancement technologies increase in power. If values drift substantially and in undesirable ways because of, e.g., peacock contests, then (a) our values lose what control they currently have, (b) we could significantly lose utility because of the fragility of value, (c) it is not an extinction event, and (d) it seems as easy to effect as x-risk reduction.

I can't figure out what you mean by:

Hiding animal suffering probably makes us "more ethical".

Do you mean that it just makes us appear more ethical?

gwern100

I think his point was that, assuming that vegetarianism is ethical, hiding the abattoirs etc may wind up increasing vegetarianism rates because when a sheltered meat-eater runs into them, they may be shocked into vegetarianism; while if everyone grew up raising their own pet chicken and slaughtering it themselves, no one would be shocked and they would all shrug upon seeing any video or photos.

I'm not sure this is true, since as the population urbanizes and specializes under economic growth, regardless of hiding animal suffering, people will inevitably no... (read more)

One major difference is that you are talking about what to care about and Eliezer was talking about what to expect.

0someonewrongonthenet
I'm talking about expectation as well. If I'm about to make 100 copies of myself, I expect that 100 versions of myself will exist in the future. That's it. I'm currently not making copies of myself, so I expect exactly one future version of myself. It's nonsensical to talk about which one of those copies I'll end up subjectively experiencing the world through in the future. That's because subjective expectation about the future is an emotionally intuitive shorthand for the fact that we care about our future selves, not a description of reality.

What do you mean?

1[anonymous]
e means that e hopes I write more half-baked posts in the middle of the night. The latest seems to be doing well, too.

According to the PhilPapers survey results, 4.3% believe in idealism (i.e. Berkeley-style reality).

1Rob Bensinger
A lot of those probably aren't advocating subjective idealism. Kantian views ('transcendental idealism'), Platonistic views ('platonic idealism'), Russellian views ('phenomenalism'), and Hegelian views ('objective idealism') are frequently called 'idealism' too. In the early 20th century, 'idealism' sometimes degraded to such an extent that it seemed to mean little more than the declaration 'I hate scientism and I think minds are interesting and weird'. It's also worth noting that Peter probably had analytic philosophy in mind when he said 'Anglophone'. Most of the idealists in the survey are probably in the continental or historically Kantian tradition.

This seems to me like a major spot where the dualistic model of self-and-world gets introduced into reinforcement learning AI design (which leads to the Anvil Problem). It seems possible to model memory as part of the environment by simply adding I/O actions to the list of actions available to the agent. However, if you want to act upon something read, you either need to model this by having atomic read-and-if-X-do-Y actions, or you still need some minimal memory to store the previous item(s) read in.
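A minimal sketch of that setup (the class and method names are mine, purely illustrative, and `base_env` is assumed to expose a `step(action) -> (observation, reward)` interface):

```python
# Sketch only: memory modeled as part of the environment via explicit I/O actions.
class EnvWithMemory:
    def __init__(self, base_env, n_slots=4):
        self.base_env = base_env
        self.memory = [0] * n_slots   # the agent's "memory", stored in the world
        self.last_read = 0            # value surfaced in the next observation

    def step(self, action):
        kind, arg = action
        if kind == "write":           # I/O action: store a value in a memory slot
            slot, value = arg
            self.memory[slot] = value
            return self._obs(None), 0.0
        if kind == "read":            # I/O action: expose a stored value
            self.last_read = self.memory[arg]
            return self._obs(None), 0.0
        obs, reward = self.base_env.step(arg)   # ordinary action, passed through
        return self._obs(obs), reward

    def _obs(self, base_obs):
        # The last value read is included in the observation, so the agent can
        # condition on it without holding any state of its own.
        return {"base": base_obs, "last_read": self.last_read}
```

Note that the sketch still carries one read value across a time step (`last_read`), which is exactly the caveat above: to act on something read, you need either combined read-and-act actions or some minimal residual memory.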

Let's say you think a property, like 'purpose', is a two parameter function and someone else tells you it's a three parameter function. An interesting thing to do is to accept that it is a three parameter function and then ask yourself which of the following holds:
1) The third parameter is useless and however it varies, the output doesn't change.
2) There is a special input you've been assuming is the 'correct' input, which allowed you to treat the function as if it were a two parameter function.
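A toy illustration in Python (the function and values are made up, just to make the two cases concrete):

```python
from functools import partial

def purpose(artifact, user, context):
    # Case 1 would mean this output never changes however 'context' varies.
    # Case 2 is shown below: the two-parameter function you thought you had
    # is really this function with 'context' silently fixed.
    return f"In {context}, {user} uses {artifact} to cut things"

DEFAULT_CONTEXT = "a kitchen"                    # the input you had been assuming
purpose_two_arg = partial(purpose, context=DEFAULT_CONTEXT)

print(purpose_two_arg("a knife", "a chef"))
print(purpose("a knife", "a chef", "a battlefield"))  # varying only the hidden parameter
```

If the output never changed as the third argument varied, you would be in case 1; here it does change, so the extra parameter is doing real work and you had simply been fixing it.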

MrMind is talking about an "oracle" in the sense of a mathematical tool. Oracles in this sense are well-defined things that can do stuff traditional computers can't.

0Eugine_Nier
I'm perfectly aware what an oracle is. I was using it in the same sense.

This crossed my mind, but I thought there might be other deeper reasons.

where both physical references and logical references are to be described 'effectively' or 'formally', in computable or logical form.

Can anyone say a bit more about why physical references would need to be described 'effectively'/computably? Is this based on the assumption that the physical universe must be computable?

0MrMind
I think because if they are described by an uncomputable procedure, for example one involving oracles or infinite resources, then they (with very high probability) could not be computed by our brains.
0A1987dM
I'm jealous of E79 A83 N9 and Autism 15, too. (I also have O80, so that's OK. OTOH my IQ is higher than that.)

For the slightly more advanced procrastinator that also finds a large sequence of tasks daunting, it might help to instead search for the first few tasks and then ignore the rest for now. Of course, sometimes in order to find the first tasks you may need to break down the whole task, but other times you don't.

amcknight-30

A Survey of Mathematical Ethics which covers work in multiple disciplines. I'd love to know what parts of ethics have been formalized enough to be written mathematically and, for example, any impossibility results that have been shown.

[This comment is no longer endorsed by its author]

1Caspar Oesterheld
Regarding impossibility results, there is now also Brian Tomasik's Three Types of Negative Utilitarianism. There are also these two attempted formalizations of notions of welfare:
* Daswani and Leike (2015): A Definition of Happiness for Reinforcement Learning Agents.
* Formalizing preference utilitarianism in physical world models, which I have written.
0lukeprog
Here's one.

I would be happy to be able to read "Procrastination and the five-factor model: a facet level analysis" (ScienceDirect, IngentaConnect). (I'm not sure if adding these links helps you guys, but here they are anyways.)

5gwern
The links help a bit. http://dl.dropbox.com/u/85192141/2001-watson.pdf

Quantum mechanics and Metaethics are what initially drew me to LessWrong. Without them, the Sequences aren't as amazingly impressive, interesting, and downright bold. As solid as the other content is, I don't think the Sequences would be as good without these somewhat more speculative parts. This content might even be what really gets people talking about the book.

5loup-vaillant
Maybe we could test that. Does LessWrong keep non-anonymous access logs? If so, we may be able to (approximately?) reconstruct access patterns over the weeks/months/years by unique user. We could know:
* What are the first reads of newcomers?
* What are the typical orders of reading?
* Does reading stop, and if so, when and where?
For instance, if we find that people who start with the quantum mechanics sequence tend to leave more often than the others, then it is probably a good idea to segregate it into a separate volume. It would at least signal that the author knows this is advanced or controversial.

Another group I recommend investigating that is working on x-risk reduction is the Global Catastrophic Risk Institute, which was founded in 2011 and has been ramping up substantially over the last few months. As far as I can tell they are attempting to fill a role that is different from SIAI and FHI by connecting with existing think tanks that are already thinking about GCR related subject matter. Check out their research page.

0Giles
Thanks - lukeprog gave us a list of x-risk orgs a while back, including GCRI, so I've pasted that into the minutes also (though I've made it clear we didn't discuss them all).

Churchland, Paul M., "State-space Semantics and Meaning Holism", Philosophy and Phenomenological Research (JStor, Philosophy Documentation Center)

4VincentYu
Here.

Problems with this approach have been discussed here.

Well it doesn't seem to be inconsistent with reality.

-1Jonathan_Elmer
The non-existence of unicorns makes the claim that they have legs, in whatever number, inconsistent with reality.
0DaFranker
It doesn't even have any referents in reality. It's not even a statement about whatever "reality" we live in, to the best of my knowledge. If it does mean five-leggedness of unicorn creatures, with the implication of the existence or possible existence of such creatures in reality, then it is false, as it's inconsistent with what we know of reality: there's no way such a creature would exist. ...I think, anyway. Not quite sure about that second part.

I'm definitely having more trouble than I expected. Unicorns have 5 legs... does that count? You're making me doubt myself.

0Jonathan_Elmer
Cool. : ) Is "Unicorns have 5 legs" consistent with reality? I would be quite surprised to find out that it was.

I think this includes too much. It would include meaningless beliefs. "Zork is Pork." True or false? Consistency seems to me to be, at best, a necessary condition, but not a sufficient one.

1Jonathan_Elmer
Could you give me an example of a belief that is consistent with reality but false?
0Peterdjones
Mutually inconsistent statements can be consistent with known facts, e.g. "Lady MacBeth had 2 children", "Lady MacBeth had 3 children"... but that just exposes the problem with correspondence. If it isn't consistency... what is it?
0Larks
Better example, maybe: the continuum hypothesis
0Jonathan_Elmer
Tell me what Zork is and I'll let you know. : )
amcknight110

I think a more general notion of truth could be defined as correspondence between a map and any structure. If you define a structure using axioms and are referencing that structure, then you can talk about the correspondence properties of that reference. This at least covers both mathematical structures and physical reality.

It seems to me that we can mean things in both ways once we are aware of the distinction.

What is anthropic information? What is indexical information? Is there a difference?

amcknight170

Citations, please! I doubt that most dictators think they are benevolent and are consequentialists.

1wedrifid
Thank you! I get tired of the whole "everybody thinks they are good" nonsense I hear all the time. I call it mind-projection. Some people just don't care.
3Decius
I think that most dictators who make it into history books think about benevolence differently than most people.

In the United States it's kind of neither. When you get an ID card, there is a yes/no checkbox you need to check.

In Probabilistic Graphical Modeling, the win probability you describe is called a Noisy OR.
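Concretely, the noisy-OR rule says each independent cause i succeeds with probability p_i and the event happens unless every cause fails: P = 1 - ∏(1 - p_i). A tiny sketch (the probabilities below are made up):

```python
def noisy_or(probs, leak=0.0):
    # P(event) when independent causes fire with the given probabilities;
    # 'leak' is the standard noisy-OR background probability (0 here).
    p_fail = 1.0 - leak
    for p in probs:
        p_fail *= 1.0 - p
    return 1.0 - p_fail

print(noisy_or([0.3, 0.5, 0.2]))  # 1 - 0.7 * 0.5 * 0.8 = 0.72
```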

amcknight130

The opponent is attacking you with a big army. You have a choice: you can let the attack through and lose in two turns, or you can send your creature out to die in your defense and lose in three turns. If you were trying to postpone losing, you would send out the creature. But you're more likely to actually win if you keep your forces alive ... [a]nd so you ask "how do I win?" to remind yourself of that.

This specific bit on its own is probably quite fruitfully generalizable. You have so many heuristics and subgoals that, after holding them... (read more)

I wouldn't use Wikipedia to get the gist of a philosophical view. At least in my experience, it's way off a lot of the time, this time included. Sorry I don't have a clear definition for you right now, though.

What is the risk from Human Evolution? Maybe I should just buy the book...

1Rain
It's well-written, though depressing, if you take "only black holes will remain in 10^45 years" as depressing news. Evolution is not a forward-looking algorithm, so humans could evolve in dangerous, retrograde ways, and thus extinguish what we currently consider valuable about ourselves, or even the species itself, should it become too dependent on current conditions.

It is often a useful contribution for someone to assess an argument without necessarily countering its points.

Not really.

It seems that optimization power as it's currently defined would be a value that doesn't change with time (unless the agent's preferences change with time). This might be fine depending on what you're looking for, but the definition of optimization power that I'm looking for would allow an agent to gain or lose optimization power.

I've heard some sort of appreciation or respect argument. An AI would recognize that we built it and so respect us enough to keep us alive. One form of reasoning this might take is that an AI would notice that it wouldn't want to die if it created an even more powerful AI, and so wouldn't destroy its creators. I don't have a source though. I may have just heard these in conversations with friends.

0Kaj_Sotala
Now that you mention it, I've heard that too, but can't remember a source either.
-4stcredzero
Perhaps the first superhuman AGI isn't tremendously superhuman, but smart enough to realize that humanity's goose would be cooked if it got any smarter or it started the exponential explosion of self-improving superhuman intelligence. So it proceeds to take over the world and rules it as an oppressive dictator to prevent this from happening. To preserve humanity, it proceeds to build colonizing starships operated by copies of itself which terraform and seed other planets with human life, which is ruled in such a fashion that society is kept frozen in something resembling "the dark ages," where science and technological industry exists but is disguised as fantasy magic.

I'm not an expert either. However, the OP function has nothing to do with ignorance or probabilities until you introduce them in the mixed states. It seems to me that this standard combining rule is not valid unless you're combining probabilities.

0Stuart_Armstrong
Hence OP is not an entropy.

If OP were an entropy, then we'd simply do a weighted sum 1/2(OP(X4)+OP(X7))=1/2(1+3)=2, and then add one extra bit of entropy to represent our (binary) uncertainty as to what state we were in, giving a total OP of 3.

I feel like you're doing something wrong here. You're mixing state distribution entropy with probability distribution entropy. If you introduce mixed states, shouldn't each mixed state be accounted for in the phase space that you calculate the entropy over?

1Stuart_Armstrong
If you go down the "entropy is ignorance about the exact microstate" route, this makes perfect sense. And various people have made convincing sounding arguments that this is the right way to see entropy, though I'm not an expert myself.