Comment author: vi21maobk9vp 05 January 2013 05:16:12PM 2 points [-]

Actually, in NBG you get explicitness of assumptions and first-order logic, and at the same time the axiom of induction is a single axiom (quantifying over classes) rather than a schema.

Actually, if you care about cardinality, you need a well-specified set theory, not just axioms for the reals. A second-order theory has a unique model, yes, but it relies on the notion of "all" subsets, so it smuggles in some set theory without specifying it. As I understand it, this was the motivation for Henkin semantics.

And if you look for a set theory (explicit or implicit) for the reals as used in physics, I am not even sure you want ZFC. For example, Solovay has shown that you can use a set theory in which all sets of reals are measurable, without much risk of contradiction. After all, the unrestricted axiom of choice is not that natural for physical intuition.

Comment author: Eliezer_Yudkowsky 05 January 2013 12:16:57PM 3 points [-]

So after reading that, I don't see how it could be true even in the sense described in the article without somehow violating Foundation. But what it literally says at the link is that every model of ZFC has an element which *encodes* a model of ZFC, not one which *is* a model of ZFC, which I suppose must make a difference somehow. In particular, it must mean that we don't get A has an element B, which has an element C, which has an element D, and so on, although I don't see yet why you couldn't construct that set using the model's model's model and so on. I am confused about this, although the poster of the link certainly seems like a legitimate authority.

But yes, it's possible that the original paragraph is just false, and every model of ZFC contains a quoted model of ZFC. Maybe the pair-encoding of quoted models allows an infinite descending sequence of submodels without an infinite descending sequence of ranks, the way the even numbers can encode the numbers, which contain the even numbers, and so on indefinitely; and the reason ZFC doesn't prove that ZFC has a model is that some models satisfy nonstandard axioms which the set modeling standard ZFC doesn't satisfy. Anyone else want to weigh in on this before I edit? (PS: upvote parent and great-grandparent.)

Comment author: vi21maobk9vp 05 January 2013 04:15:06PM 0 points [-]

Well, technically, not every model of ZFC has a ZFC-modelling element. There is a model of "ZFC + ¬Con(ZFC)", and no element of this monster can be a model of ZFC, not even with a nonstandard element-relation.
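For what it's worth, the chain of reasoning here can be written out step by step (a sketch; it assumes ZFC is consistent, and "model" means model in the internal sense of M):

```latex
\begin{align*}
&\text{G\"odel II:} && \mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC}) \\
&\text{Completeness:} && \exists M \;\; M \models \mathrm{ZFC} + \neg\mathrm{Con}(\mathrm{ZFC}) \\
&\text{Provable in ZFC:} && \text{``some set models ZFC''} \rightarrow \mathrm{Con}(\mathrm{ZFC}) \\
&\text{Hence, inside } M\text{:} && M \models \text{``no set is a model of ZFC''}
\end{align*}
```

(Externally the situation is subtler: such an M must be ω-nonstandard, so its internal "ZFC" includes nonstandard axioms, which is exactly the loophole about nonstandard axioms raised earlier in the thread.)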

Comment author: niceguyanon 27 December 2012 03:34:05PM 1 point [-]

Why is it fun to be bad? I've heard actors say it's more fun playing the bad guy. Also, I find the thought of stealing millions of dollars and getting away with it thrilling, but not very nice.

Comment author: vi21maobk9vp 01 January 2013 12:22:07PM 1 point [-]

It may be that goal-orientation without made-up rules is fun; as a good person you need to follow some of the more stupid moral norms that made sense a mere two hundred years ago.

Comment author: FiftyTwo 31 December 2012 12:40:53AM *  0 points [-]

What are people's thoughts on the Sapir-Whorf hypothesis (the idea that the nature of a language affects how its speakers think)?

If it is true, are there lessons for teaching rationality in different linguistic communities, or for modifying language to increase rationality?

Comment author: vi21maobk9vp 01 January 2013 12:13:12PM 1 point [-]

It seems that, in its weak formulations, it can be confirmed.

Have you read "Through the Language Glass" by Deutscher?

Choosing better words for some situations does train you in some skills. It looks like people distinguish colours more quickly if the colours have different names: for example, a Russian speaker will notice the difference between "closer to sky blue" and "closer to navy blue" faster than an English speaker, because of the habit of classifying them as different colours. Deutscher cites a few different studies of that kind.

Apparently, language can also change your default reactions (how you interpret omissions). For example, you can set up a scene on a table, lead a person to another room, and ask which table there has the same scene as in the first room; whether their language uses north/south or left/right for path descriptions can be read off from the answers.

As for applications, it seems to recommend what you would try anyway: if you want to improve awareness of something, encourage saying it out loud every time.

Comment author: vi21maobk9vp 25 December 2012 06:20:01PM 1 point [-]

Actually, what you may wonder is whether the utility of increased status simply has a complex shape for you.

For example, I can imagine situations of having too little status, but in most cases I get what is personally enough for me before even trying.

Comment author: Rick_from_Castify 25 December 2012 02:56:08AM *  7 points [-]

I guess I should have inserted in there that "we can't run a business and do CC-BY-SA". Of course we could use that license, but then everyone would just share the recordings for free.

We are not trying to be greedy; we just want to build a viable business that provides a valuable service. If you see a clever way to do that while still using the CC-BY-SA license, then please let us know. We are still new and are willing to consider different business models.

Comment author: vi21maobk9vp 25 December 2012 07:28:29AM 8 points [-]

Actually, whatever license you use, your content will be copied around.

If you use a proprietary license after taking CC-BY core content, copying your content will be less legal and less immoral.

In response to comment by [deleted] on Open Thread, December 1-15, 2012
Comment author: Maelin 12 December 2012 03:45:25AM 3 points [-]

My father told me about someone he knew when he was working as a nurse at a mental hospital, who tried killing himself three times with a gun in the mouth. The first two times he used a pistol of some sort - both times, the bullet passed between the hemispheres of his brain (causing moderate but not fatal brain damage), exited through the back of his head, and all the hot gases from the gun cauterised the wounds.

The third time he used a shotgun, and that did the job. For firearm based suicide, I think above the ear is a safer bet.

Comment author: vi21maobk9vp 14 December 2012 06:40:56PM 0 points [-]

A pistol in the mouth seems to require a mouthful of water for a high chance of success.

Comment author: nigerweiss 09 December 2012 11:24:56PM 0 points [-]

Okay, sure, but here's the hitch:

Even if you gave me a whole bunch of nanobots that could rewire my brain any way I wanted, I would have no clue how to do that. I'm not sure the modern establishment of neurology has any good idea of how you'd do it. I know for sure that nobody on Earth knows how to do it in a safe way that is guaranteed not to cause psychosis, seizures, or other glitches down the line. It's going to take serious, in-depth, and expensive research to figure out how to make these changes in a sane way.

Comment author: vi21maobk9vp 10 December 2012 10:33:39AM 0 points [-]

Everything you said is true.

Also, it can even be that you cannot rewire your existing brain while keeping all its current functionality and not increasing its size.

But I look at the evidence about learning (including learning to see via photoelements stimulating non-visual neurons, and the learning of new motor skills). Also, it looks like selection for brain size went quite efficiently during human evolution, and we would just want to shift the equilibrium. I do think that building an upload at all would require a good enough understanding of cortex structure that you could increase the neuron count and then learn to use the improved brain by the normal learning methods.

Comment author: nigerweiss 08 December 2012 12:35:37AM 0 points [-]

It's actually worse than that. Humans do not scale well to more computing power. A good AI could, in principle, expand the depth of its search trees logarithmically with compute (possibly a bit better with Monte Carlo approaches). If you gave an AI ten times more processing power, it could, at the bare minimum, extend the depth or detail of its planning several times over. The same is not true of human neurology. All an em can do with more processing power is run faster, which has limited value. A human can do things a chimp just can't, even if the chimp has a really long time to think about it. The human brain was not designed to scale with processing power, to run on a linear computer, or to be modular and improvable. De novo AI is just (probably) going to run circles around us.
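A back-of-the-envelope sketch of the depth-vs-compute claim (assuming a uniform branching factor, which is my simplification, and a chess-like figure of ~35 picked purely for illustration): a tree search that can evaluate N nodes reaches depth about log_b(N), so multiplying compute by ten adds only log_b(10) extra ply.

```python
import math

def reachable_depth(nodes: float, branching_factor: float) -> float:
    """Depth d such that a uniform tree with branching_factor**d nodes
    fits in a budget of `nodes` evaluations."""
    return math.log(nodes, branching_factor)

b = 35.0                                # rough chess-like branching factor
base = reachable_depth(1e9, b)          # depth reachable with 10^9 evaluations
boosted = reachable_depth(1e10, b)      # depth with ten times the compute

# Ten times the compute buys only log_35(10) ~ 0.65 extra ply.
print(round(base, 2), round(boosted, 2), round(boosted - base, 2))
```

The gap `boosted - base` is constant no matter how large the budget already is, which is the sense in which depth scales logarithmically with compute.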

Comment author: vi21maobk9vp 09 December 2012 05:56:35PM 0 points [-]

A good upload could increase its short-term working-memory capacity for distinct objects, to match more complex patterns.

Comment author: DaFranker 07 December 2012 04:02:47PM *  2 points [-]

Learning programming takes years.*

* On average, for the average population.

Comment author: vi21maobk9vp 09 December 2012 05:51:23PM 0 points [-]

On the other hand, the better you are, the more things you learn just because they are easy enough to learn to be worth your time.
