All of simpleton's Comments + Replies

simpleton-10

Conway’s Game of Life is Turing-complete. Therefore, it is possible to create an AI in it. If you created a 3^^3 by 3^^3 Life board, setting the initial state at random, presumably somewhere an AI would be created.

I don't think Turing-completeness implies that.

Consider the similar statement: "If you loaded a Turing machine with a sufficiently long random tape, and let it run for enough clock ticks, an AI would be created." This is clearly false: Although it's possible to write an AI for such a machine, the right selection pressures don't ex... (read more)

0[anonymous]
That is not the same thing at all, though.
8benelliott
If an AI is possible in Life, then a sufficiently large random field will almost certainly contain one. Whether it will have enough of an advantage to beat the simple self-replicators and crystalline growing patterns for dominance of the entire field is another question.
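
For readers who haven't seen the substrate under discussion, here is a minimal sketch of Life's update rule in Python, run on a glider (the simplest spaceship). The set-of-live-cells representation and the `step` helper are illustrative choices of mine, not anything from the thread:

```python
# One generation of Conway's Game of Life on an unbounded grid,
# represented as a set of live (x, y) cells.
from collections import Counter

def step(live):
    """Count each cell's live neighbors, then apply the B3/S23 rule."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):              # a glider repeats every 4 steps,
    glider = step(glider)       # shifted one cell diagonally
print(sorted(glider))
```

Everything in the thread, from self-replicators to hypothetical AIs, is built out of exactly this rule applied to larger initial patterns.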

Stephenson remains one of my favorites, even though I failed at several attempts to enjoy his Baroque Cycle series. Anathem is as good as his pre-Baroque-Cycle work.

2Normal_Anomaly
I have not attempted the Baroque Cycle. I enjoyed Anathem, and am hoping to read Cryptonomicon next.
simpleton270

Poor kid. He's a smart 12 year old who has some silly ideas, as smart 12 year olds often do, and now he'll never be able to live them down because some reporter wrote a fluff piece about him. Hopefully he'll grow up to be embarrassed by this, instead of turning into a crank.

His theories as quoted in the article don't seem to be very coherent -- I can't even tell if he's using the term "big bang" to mean the origin of the universe or a nova -- so I don't think there's much of a claim to be evaluated here.

Of course, it's very possible that the re... (read more)

4[anonymous]
I agree with this, but I'd bet this kid would be willing to drop his pet theory if he found it was wrong (if grudgingly). I really don't think this one article, or just being in the news mostly for his youth/intelligence combo, will ruin him.
4XiXiDu
Reminds me of this old article (04.19.01) about Yudkowsky:

For those of you who watch Breaking Bad, the disaster at the end of Season 3 probably wouldn't have happened if the US adopted a similar system.

When I saw that episode, my first thought was that it would be extraordinarily unlikely in the US, no matter how badly ATC messed up. TCAS has turned mid-air collisions between airliners into an almost nonexistent type of accident.

0Aurini
After writing that I thought "Actually, it probably still would have happened, because it's such a great plot element." Rule of Cool.

This does happen a lot among retail investors, and people don't think about the reversal test nearly often enough.

There's a closely related bias which could be called the Sunk Gain Fallacy: I know people who believe that if you buy a stock and it doubles in value, you should immediately sell half of it (regardless of your estimate of its future prospects), because "that way you're gambling with someone else's money". These same people use mottos like "Nobody ever lost money taking a profit!" to justify grossly expected-value-destroyin... (read more)

2Unnamed
That's called the house money effect (from Thaler & Johnson, 1990).
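
A toy Monte Carlo sketch makes the cost concrete. The numbers here are mine and purely illustrative: a stock believed to return +25% or -15% per period with equal probability (positive expected return), versus the mechanical rule "sell half whenever your position doubles" with the proceeds parked in zero-return cash:

```python
# Compare holding a positive-EV stock against the "house money" rule
# of mechanically selling half after every doubling.
import random

def simulate(sell_half_on_double, periods=20, trials=100_000):
    total = 0.0
    for _ in range(trials):
        stock, cash, basis = 1.0, 0.0, 1.0
        for _ in range(periods):
            stock *= 1.25 if random.random() < 0.5 else 0.85
            if sell_half_on_double and stock >= 2 * basis:
                cash += stock / 2      # "take a profit"
                stock /= 2
                basis = stock          # reset the doubling trigger
        total += stock + cash
    return total / trials

print("hold:     ", simulate(False))   # ~2.65, i.e. 1.05^20
print("sell-half:", simulate(True))    # reliably lower
```

If you genuinely expect the stock to keep outperforming cash, the rule just moves money out of the better bet; the past doubling is irrelevant to the decision.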

It's common in certain types of polemic. People hold (or claim to hold) beliefs to signal group affiliation, and the more outlandishly improbable the beliefs become, the more effective they are as a signal.

It becomes a competition: Whoever professes beliefs which most strain credibility is the most loyal.

2[anonymous]
I think that most people who tell pollsters they believe conspiracy theories wouldn't bet on them.

Sorry, I thought that post was a pretty good statement of the Friendliness problem, sans reference to the Singularity (or even any kind of self-optimization), but perhaps I misunderstood what you were looking for.

0nhamann
Oh, I misunderstood your link. I agree, that's a good summary of the idea behind the "complexity of value" hypothesis.
0nhamann
I do not understand your point. Would you care to explain?

Argh. I'd actually been thinking about getting a 23andme test for the last week or so but was put off by the price. I saw this about 20 minutes too late (it apparently ended at midnight UTC).

In practice, you can rarely use GPLed software libraries for development unless you work for a nonprofit.

That's a gross overgeneralization.

1Paul Crowley
That seems to overstate it rather -- it's a generalisation, but it's mostly true. Most software written for for-profit employers isn't GPL, and is often distributed even if only to a client or to other employees, so can't link to GPLed libraries directly. Still that's a long way from saying you can't use Cygwin at work. Sebastian Hagen's comment seems accurate to me.

Yes.

The things Shalmanese is labeling "reason" and "evidence" seem to closely correspond to what have previously been called the inside view and outside view, respectively (both of which are modes of reasoning, under the more common definition).

6RobinHanson
Yes, that was going to be my comment. The outside view also uses "reason" but with wider and shallower chains of reasoning. The inside view is more fragile, requiring more assumptions and longer chains of reasoning.

MWI completely fails if any such non-linearities are present, while other theories can handle them. [...] It can collapse with one experiment, and I'm not betting against such an experiment happening in my lifetime at odds higher than 10:1.

So you're saying MWI tells us what to anticipate more specifically (and therefore makes itself more falsifiable) than the alternatives, and that's a point against it?

0taw
It's the point against certainty about MWI, not against MWI. If we go down to the 200th decimal place and find perfect linearity, it would be weak evidence for MWI (because other interpretations are fairly agnostic about it).
5steven0461
The possibility of future evidence against some hypothesis isn't evidence against that hypothesis. It also isn't evidence for that hypothesis. The only experiments that count are the ones that have actually been done.
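
steven0461's point is the conservation of expected evidence: before an experiment runs, the probability-weighted average of its possible posteriors already equals the prior, so the mere possibility of future disconfirmation can't lower P(MWI) now. A toy numerical check (the 0.6, 0.9, and 0.3 are arbitrary numbers of mine):

```python
# Conservation of expected evidence for a binary experiment E.
prior = 0.6                 # P(H)
p_e_given_h = 0.9           # P(E | H)
p_e_given_not_h = 0.3       # P(E | not H)

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
post_if_e = prior * p_e_given_h / p_e
post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)

expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(expected_posterior)   # 0.6 -- exactly the prior
```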
simpleton100

And the best workaround you can come up with is to walk away from the money entirely? I don't buy it.

If you go through life acting as if your akrasia is so immutable that you have to walk away from huge wins like this, you're selling yourself short.

Even if you're right about yourself, you can just keep $1000 [edit: make that $3334, so as to have a higher expected value than a sure $500] and give the rest away before you have time to change your mind. Or put the whole million in an irrevocable trust. These aren't even the good ideas; they're just the trivial ones which are better than what you're suggesting.
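
(Presumably the underlying gamble is a 15% chance at the million; that's inferred from the figures rather than stated here: 0.15 × $3334 ≈ $500.10, just over the sure $500.)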

Being aware of that tendency should make it possible to avoid ruination without forgoing the money entirely (e.g. by investing it wisely and not spending down the principal on any radical lifestyle changes, or even by giving all of it away to some worthy cause).

1Alicorn
Unless there's akrasia involved. I can only imagine how tempting it would be to just outright buy a house if I were suddenly handed a million dollars, no matter how sternly I told myself not to just outright buy a house.

Well, I wouldn't rule out any of:

1) I and the AI are the only real optimization processes in the universe.

2) I-and-the-AI is the only real optimization process in the universe (but the AI half of this duo consistently makes better predictions than "I" do).

3) The concept of personal identity is unsalvageably confused.

simpleton170

If we have this incapability, what explains the abundant fiction in which nonhuman animals (both terrestrial and non) are capable of speech, and childhood anthropomorphization of animals?

That's not anthropomorphization.

Can you teach me to talk to the stray cat in my neighborhood?

Sorry, you're too old. Those childhood conversations you had with cats were real. You just started dismissing them as make-believe once your ability to doublethink was fully mature.

All of the really interesting stuff, from before you could doublethink at all, has been blocked out entirely by infantile amnesia.

Good point; "Children are sane" belongs somewhere high on the list.

simpleton160

I would believe that human cognition is much, much simpler than it feels from the inside -- that there are no deep algorithms, and it's all just cache lookups plus a handful of feedback loops which even a mere human programmer would call trivial.

I would believe that there's no way to define "sentience" (without resorting to something ridiculously post hoc) which includes humans but excludes most other mammals.

I would believe in solipsism.

I can hardly think of any political, economic, or moral assertion I'd regard as implausible, except that one of the world's extant religions is true (since that would have about as much internal consistency as "2 + 2 = 3").

5billswift
You're confusing sentience and sapience. All other mammals are almost certainly sentient; it's sapience they generally (or completely) lack.
Alicorn120

Solipsism? Isn't there some contradiction inherent in believing in solipsism because someone else tells you that you should?

The actual quote didn't contain the word "beat" at all. It was "Count be wrong, they fuck you up."

The fact that we find ourselves in a world which has not ended is not evidence.

1timtyler
Er, I wasn't citing the existence of the world as evidence, rather pointing to the extended period of time for which it has persisted -- which is relevant evidence.

lesswrong.com's web server is in the US but both of its nameservers are in Australia, leading to very slow lookups for me -- often slow enough that my resolver times out (and caches the failure).

I am my own DNS admin so I can work around this by forcing a cache flush when I need to, but I imagine this would be a more serious problem for people who rely on their ISPs' DNS servers.
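
For anyone who wants to check from their own machine, here's a stdlib-only sketch; `socket.gethostbyname` goes through whatever resolver and cache the OS is configured with, so it measures exactly the path described above:

```python
# Time repeated lookups: distant nameservers plus a cold cache show up
# as a slow (or timed-out) first attempt; later ones come from cache.
import socket
import time

def time_lookup(hostname, attempts=3):
    for i in range(1, attempts + 1):
        start = time.monotonic()
        try:
            addr = socket.gethostbyname(hostname)
            print(f"attempt {i}: {addr} in {time.monotonic() - start:.3f}s")
        except socket.gaierror as err:
            print(f"attempt {i}: failed after {time.monotonic() - start:.3f}s ({err})")

time_lookup("lesswrong.com")
```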

Quite a bit is known about the neurology behind face recognition. No one understands the algorithm well enough to build a fusiform gyrus from scratch, but that doesn't mean the fact that there is an algorithm is mysterious.

1conchis
Even if we did not have any understanding of the neurology, I'm not sure why pointing to an empirical record of successful face recognition shouldn't be fairly convincing. Is the point that we could be lying about our record? (In the specific example given, you could probably get a fair bit of mileage from explaining the nature of vision, even without the specifics of face-recognition. I'm not really sure what broader lesson that might have though, as I don't fully understand the nature of the question you're asking.)
0Cyan
Thanks muchly.

Thanks, it looks like I misremembered -- if they're now doing perfusion after neuroseparation then it's much more likely to be compatible with organ donation.

I've sent Alcor a question about this.

This is the only reason I haven't signed up.

What I want to do is sign up for neuropreservation and donate any organs and tissues from the neck down, but as far as I can tell that's not even remotely feasible. Alcor's procedure involves cooling the whole body to 0°C and injecting the cryoprotectant before removing the head (and I can understand why perfusion would be a lot easier while the head is still attached). Also, I think it's doubtful that the cryonics team and the transplant team would coordinate with each other effectively, even if there were no technical obstacles.

Are we developing a new art of akrasia-fighting, or is this just repackaged garden-variety self-help?

Edit: I don't mean to disparage anyone's efforts to improve themselves. (My only objection to the field of "self-help" is that it's dominated by charlatans.) But there is an existing body of science here, and I fear that if we go down this road the Art of Rationality will turn into nothing more than amateur behavioral psychology.

1MrShaggy
I think this would be a weak community if going down that road turned it into a non-scientific amateur fest, which is how I understand Simpleton's concern.
GaryWolf100

Simpleton - your comment struck me as right on target except I would give this a positive value rather than a negative one. A lot of self help takes the form of akrasia-fighting; the question of course is whether it works. Amateur behavioral psychology would be one of the tools for separating effective from ineffective akrasia-fighting, yes?

The word amateur could perhaps use some re-valuing, especially in this context. The amateur, the non-professional, the person who wants to solve this problem for the personal benefit of enhancing his or her own decisio... (read more)

2JulianMorrison
This was more along the lines of "a kick in the pants" plus anecdotal evidence gathering. Advancement and usage should happen together, for obvious reasons.
simpleton200

If in 1660 you'd asked the first members of the Royal Society to list the ways in which natural philosophy had tangibly improved their lives, you probably wouldn't have gotten a very impressive list.

Looking over history, you would not have found any tendency for successful people to have made a formal study of natural philosophy.

It would be overconfident for me to say rationality could never become useful. My point is just that we are acting like it's practically useful right now, without very much evidence for this beyond our hopes and dreams. Thus my last sentence - that "crossing the Pacific" isn't impossible, but it's going to take a different level of effort.

If in 1660, Robert Boyle had gone around saying that, now that we knew Boyle's Law of gas behavior, we should be able to predict the weather, and that that was the only point of discovering Boyle's Law and that ... (read more)

Alcor says they have a >50% incidence of poor cases.

I strongly second the idea of using real science as a test. Jeffreyssai wouldn't be satisfied with feeding his students -- even the beginners -- artificial puzzles all day. Artificial puzzles are shallow.

It wouldn't even have to be historical science. Science is still young enough that there's a lot of low-hanging fruit. I don't think we have a shortage of scientific questions which are genuinely unanswered, but can be recognized as answerable in a moderate amount of time by a beginner or intermediate student.

simpleton250

There's a heuristic at work here which isn't completely unreasonable.

I buy $15 items on a daily basis. If I form a habit of ignoring a $5 savings on such purchases, I'll be wasting a significant fraction of my income. I buy $125 items rarely enough that I can give myself permission to splurge and avoid the drive across town.

The percentage does matter -- it's a proxy for the rate at which the savings add up.

It's also a proxy for the importance of the savings relative to other considerations, which are often proportional to the value of what you're buying.... (read more)

3PaulG
It seems to me like it shouldn't matter how often you buy the $15 items, technically. Even if you always bought $125 items and never bought $15 items, your heuristic still wouldn't be completely irrational. If you only buy $125 items, you'll only be able to buy about 4% more stuff with your income, as compared to 50% more stuff if you always buy $15 items.
5Nick_Tarleton
But if the time to drive across town is worth more than $5 in the $125 case, it's worth more than $5 in the $15 case, and forming that habit loses big. (Unless driving across town once allows you to save on more than one item, but that completely breaks the example.) Other than cognitive cost, I don't see any reason to speak in terms of habits rather than case-by-case judgments here. In the car case, you know the cost of walking away is very high; this screens off the informational value of the price.
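
For reference, the purchasing-power arithmetic behind this exchange, with the example's numbers worked out:

```python
# A fixed budget buys list_price / (list_price - discount) times as
# many items at the discounted price.
def extra_stuff(list_price, discount):
    return list_price / (list_price - discount) - 1

print(f"$5 off a $125 item: {extra_stuff(125, 5):.1%} more stuff")  # ~4.2%
print(f"$5 off a $15 item:  {extra_stuff(15, 5):.1%} more stuff")   # 50.0%
```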