Soki

Assuming that the effects of dieting for a single day are very small, the utility of not eating knots today is likely lower than the utility of eating them, for any fixed future behavior.
A CDT agent only decides what it does now, so a CDT agent chooses to eat knots.
But an EDT, TDT, or UDT agent would choose to diet.
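
A minimal sketch of that structure in Python, with made-up payoff numbers; framing "today" versus "all future days" as a two-by-two game against your future selves is my own illustration, not part of the original problem:

```python
# Toy payoff table (made-up numbers). Eating today barely dents the benefit
# of a future spent dieting, so eating dominates for each fixed future;
# but dieting throughout beats eating throughout.
U = {
    ("eat",  "eat"):   0.0,  # indulge every day: no health benefit
    ("diet", "eat"):  -1.0,  # one futile day of dieting
    ("eat",  "diet"): 10.0,  # pleasure today, almost the full health benefit
    ("diet", "diet"):  9.0,  # full health benefit, no pleasure today
}

# CDT: hold the future fixed and compare today's two options.
for future in ("eat", "diet"):
    assert U[("eat", future)] > U[("diet", future)]  # eating dominates

# EDT/TDT/UDT: the same algorithm decides every day, so today's output
# matches the future's; only the diagonal cells are attainable.
assert U[("diet", "diet")] > U[("eat", "eat")]       # dieting wins
```

The diagonal comparison is the whole trick: an agent whose present and future selves run the same decision algorithm cannot land on an off-diagonal cell.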

Soki

Even though your audience is familiar with the Singularity, I would still emphasize the potential power of an AGI.
You could say something about the AI spreading across the Internet (a 1,000× to 1,000,000× increase in processing power), bootstrapping nanotech, and rewriting its own source code, and point out that all of this could happen very quickly.

Ask them what they think such an AI would do, and if they show signs of anthropomorphism, explain to them that they are biased (the mind projection fallacy, for example).
You can also ask them what goal they would give such an AI and show what kind of disaster might follow.
That can lead you to the complexity of wishes (a computer has no common sense) and the complexity of human values.

I would also put together a nice set of links to lesswrong.org and singinst.com for them to read after the presentation.

It would be great if you could give us some feedback after your presentation: what worked, what they found odd, how they reacted, and what questions they asked.

Soki

It may not be what wedrifid meant, but does Omega always appear after you see the result on the calculator?
Does Omega always ask:
"Consider the counterfactual where the calculator displayed opposite_of_what_you_saw instead of what_you_saw"?

If that is true, then I guess that what Omega replaces your answer with on the test sheet, in the worlds where you see "even", is the answer you write on the counterfactual test sheet in the worlds where you see "odd", and vice versa.
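
For concreteness, here is a toy Python model of that guess; the names and the substitution rule are only my illustration of this reading, not a statement of the original problem:

```python
# In each world the agent sees a display and writes an answer; per the
# guessed rule, Omega fills the test sheet in "even"-worlds with whatever
# the agent writes in the corresponding "odd"-world, and vice versa.
def counterpart(display):
    return "odd" if display == "even" else "even"

# what_you_write[display]: the answer your decision algorithm outputs
# after seeing that display (placeholder values).
what_you_write = {"even": "A", "odd": "B"}

# Omega's substitution rule, as guessed above.
final_sheet = {d: what_you_write[counterpart(d)] for d in ("even", "odd")}
assert final_sheet["even"] == what_you_write["odd"]
assert final_sheet["odd"] == what_you_write["even"]
```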

Soki

When I hear a bad argument, it feels like listening to music and hearing a wrong note.
In one case it is the chain of logic that is broken; in the other, the interval between notes.
Actually the bad argument is worse, because a pianist usually gets back on track.

Soki

Ask yourself which aspects of what you want to prove are thrilling. Look for what you cannot explain but feel is true.

I want to write a proof.

Before writing, you should be satisfied with your understanding of the problem. Try to find holes in it, as if you were a teacher reading a student's work.

You should also ask yourself why you want to write a correct proof, and remember that a proof that is wrong is not a proof.

Soki

I think that you should finish this sequence on lesswrong.
It is less technical and easier to understand than other posts on Decision Theory, and would therefore be valuable for newcomers.

Soki

I support this idea.

But what about copyright issues? What if posts and comments are owned by their writers?

Soki

knb, does your nephew know about lesswrong, rationality and the Singularity? I guess I would have enjoyed reading such a website when I was a teenager.

When it comes to a physical book, Engines of Creation by Drexler can be a good way to introduce him to nanotechnology and to what science can make happen. (I know that nanotech is far less important than FAI, but I think it is more "visual": you can imagine nanobots manufacturing things or curing diseases, while you cannot imagine a hard takeoff.)
Teenagers need dreams.

Soki

I just did a small calculation:

The number of deaths in the US is about 2.5 million per year.
The cost of cryonics is about $30,000 per "patient" with the Cryonics Institute.
So if everyone wanted to be frozen, it would cost $75 billion a year, about 0.5% of US GDP, or 3% of healthcare spending.
This neglects economies of scale, which could greatly reduce the price.

So even with a low probability of success, cryonics seems to be a good choice.
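
Spelled out as a quick sketch (the GDP and healthcare-spending totals below are back-solved from the percentages above, not independently sourced):

```python
# Back-of-the-envelope check of the numbers in the comment.
deaths_per_year = 2.5e6        # approximate annual US deaths
cost_per_patient = 30_000      # approximate Cryonics Institute price, USD
us_gdp = 15e12                 # rough US GDP, USD (assumed)
healthcare_spending = 2.5e12   # rough annual US healthcare spending, USD (assumed)

total = deaths_per_year * cost_per_patient
print(f"total: ${total / 1e9:.0f} billion/year")                  # 75 billion
print(f"share of GDP: {total / us_gdp:.1%}")                      # 0.5%
print(f"share of healthcare: {total / healthcare_spending:.1%}")  # 3.0%
```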
