All of Crazy philosopher's Comments + Replies

We also hate hate and love love

Another problem with the author's calculations of the potential to improve intelligence: suppose there is a problem in the human brain that reduces IQ by 10 points, and it can be solved by either Gene1 or Gene2. Suppose also that 99% of humans have neither Gene1 nor Gene2. In this case, the author's method would show that adding both Gene1 and Gene2 to the same person increases their IQ by 20 points, even though the real gain is only 10.
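
As a concrete illustration of this double-counting worry (made-up numbers; the additive model below is my illustration of the concern, not necessarily the author's exact method):

```python
# Minimal sketch of the double-counting worry, with made-up numbers.
# Assumption: each gene's effect is estimated independently, and the
# predicted gain from an edit is the sum of those per-gene estimates.

def true_gain(has_gene1: bool, has_gene2: bool) -> int:
    """Either gene fixes the same 10-point problem; having both adds nothing."""
    return 10 if (has_gene1 or has_gene2) else 0

# Per-gene effect as an additive model would estimate it in a population
# where carriers of one gene almost never carry the other:
effect_gene1 = true_gain(True, False) - true_gain(False, False)   # 10
effect_gene2 = true_gain(False, True) - true_gain(False, False)   # 10

additive_prediction = effect_gene1 + effect_gene2                  # 20
actual = true_gain(True, True)                                     # 10

print(additive_prediction, actual)  # 20 vs 10: the additive model overcounts
```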

3GeneSmith
I don't understand your question

Eliezer Yudkowsky is trying to prevent the creation of recursively self-improved AGI because he doesn't want competitors.

So if one day you decided that P(X) ≈ 1, you would remember it as "it's true, but I'm not sure" a year later?

2Donald Hobson
If I only have 1 bit of memory space, and the probabilities I am remembering are uniformly distributed from 0 to 1, then the best I can do is remember whether the chance is > 1/2. And then a year later, all I know is that the chance is > 1/2, but otherwise uniform, so the average value is 3/4. The limited memory does imply lower performance than unlimited memory. And yes, when I was in a pub quiz, I was going "I think it's this option, but I'm not sure" quite a lot.
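
A quick numerical sketch of that argument (assuming, as above, probabilities drawn uniformly from [0, 1] and a single stored bit "is the chance > 1/2?"):

```python
# Probabilities are drawn uniformly from [0, 1]; only the bit "p > 1/2?" is
# stored, and a year later the best recall is the conditional mean of the
# bucket: 1/4 for the low bucket, 3/4 for the high one.
import random

random.seed(0)
ps = [random.random() for _ in range(100_000)]

recalled = [0.75 if p > 0.5 else 0.25 for p in ps]            # 1 bit of memory
mean_sq_error_1bit = sum((p - r) ** 2 for p, r in zip(ps, recalled)) / len(ps)
mean_sq_error_full = 0.0                                       # unlimited memory recalls p exactly

print(round(mean_sq_error_1bit, 4), mean_sq_error_full)        # ~0.0208 vs 0.0
```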

Coral should try to be a white-hat hacker for Mr. Topaz's company. Mr. Topaz would agree, because Coral could say that if she doesn't succeed she takes no money, so he loses nothing. After a few rounds in which Coral hacks all the drones' software within an hour of each new version being presented, Mr. Topaz would understand that security is important.

Can you tell us what exactly led to the "something" explosion? Did something change in your life beforehand?

Our discussion looks like:

Me: we can do X, which means doing X1, X2, and X3.

You: we can fail at X2 in way Y.

Do you mean "we should think about Y before carrying out plan X" or "plan X will definitely fail because of Y"?

 

A question to better understand your opinion: if the whole alignment community put the same effort into carrying out the Political Plan that it now puts into aligning AI directly, what do you think the probability of successful alignment would be?

2ChristianKl
Basically, you are saying "we can do X and I hope it will do A, B and C" without any regard for the real-world consequences. It will likely go wrong, as engaging in politics is mind-killing and it's important to think clearly to achieve AI alignment.

To summarize our discussion:
There may be a way to get the right government action and greatly improve our chances of alignment. But it requires a number of actions, some of which may have never been done by our society before. They may be impossible.
These actions include: 1: learning how to effectively change people's minds with videos (maybe something bordering on dark epistemology); 2: convincing tens of percent of the population of the right memes about alignment via social media (primarily YouTube); 3: changing the minds of interlocutors in political deba... (read more)

2ChristianKl
It's not at all clear that if you convince someone on a superficial level that they should care about AI alignment, that will result in the right actions. On the other hand, thinking on that level can be quite corrosive for your own understanding. The soldier mindset is not useful for thinking about efficient mechanisms. 

I agree that there are pitfalls, and it will take several attempts for the laws to start working.

If the US government allocates a significant amount of money for (good) AI alignment research in combination with the ban, then our chances will increase from 0% to 25% in a scenario without black swans.

2ChristianKl
The problem is not whether a law works but whether it does what's needed. If you look at the laws that exist in our society, they usually do something, but at the same time they don't solve problems completely. Politicians are quite quick to pass a law to "do something", but that does not mean the problem is solved effectively. The more political the debate is, the less likely it often is that the law actually does what it is intended to do.

The problem is that we don't know what regulations we need to actually achieve the goal. 

Would it work to ban all research that increases AI capabilities, except research that brings us closer to alignment? And also ban the creation of AI systems with capability greater than X, with X gradually decreasing.

There are many ways to increase the number of AI alignment researchers that then lead to those focusing on questions like algorithmic gender and race bias without actually making progress on the key problem.

The idea is to create videos fully describing the goals of AGI alignment, so viewers would understand the context.
 

2ChristianKl
"Will it work?" is a question where we don't really know the answer. As far as "ban all research to increase AI capabilities except those that bring us closer to alignment" goes, that's not something you can write into a law. A law needs a mechanism. It needs definitions about what research is allowed and what isn't.  Also laws by their nature only affect a country. 

I don't understand the specific mechanism that makes us need rest days. I don't see gears.

So even if politicians pass the regulations we need and increase the number of AI alignment researchers, it doesn't increase our chances much?

Why?

2ChristianKl
The problem is that we don't know what regulations we need to actually achieve the goal.  There are many ways to increase the number of AI alignment researchers that then lead to those focusing on questions like algorithmic gender and race bias without actually making progress on the key problem.

If videos convince random people, then they will convince a certain number of politicians and AI developers.

If enough people are convinced of the need for AGI alignment, politicians will start promoting AGI alignment in order to get votes.

If we make the videos well, regulations on AI development will be introduced. If we make them really well, the government may directly allocate money for alignment research.

Spreading this idea will increase our resources (more people will work on it).

2ChristianKl
All of those things can happen and the result is still that AI kills humanity. While, all else being equal, more resources and people are nice, that alone does not solve alignment. Reality does not grade based on the amount of effort you put in.

It doesn't work that way for me.

For example, when I repeat the Litany of Tarski, I think "I really, really, really want to know the truth about this, whatever it is, and I hope biases will not stop me." When I try to get to know a person, I 1. come up with a question (feeling active curiosity about the person in general); 2. ask it (feeling active curiosity about that question); 3. go back to step 1.

Even if I don't have a concrete question, I often have a strong desire to improve my map. It's that way for me because I once read "truth let us achieve our objective and ma... (read more)

Thanks, I'll make a note of it.

If someone is working on this, they are probably not going to reply here. But, ignoring the difficulty of the task, it is not certain whether doing so would actually improve our chances. On one hand, yeah, humanity could get a few extra years to figure out alignment. On the other hand, I am afraid that the debate around alignment would be utterly poisoned; for most people, the word "alignment" would start to mean "a dangerous terrorist". So during those extra years there probably wouldn't be a lot of alignment research done.


OK, it was too radical. But what's ... (read more)

3Viliam
If humanity is already about to go extinct, the chips have already been produced, and you need to (also) blow up the data centers.

My factual disagreement:

I suppose people are already doing this?

So do it more, instead of writing articles like "How to Spend the Last 5 Years of Life".

That was kinda the original plan of Less Wrong, which in hindsight probably seems too optimistic. (Even Putin expected three days to take over Ukraine.)

Continuing this plan is better than nothing (better than accepting defeat). And... good joke.

Something like MIRI?

MIRI is working only on direct alignment, isn't it?

Different tasks require different levels of talent. Compared to saving the world, creating a successful startup is t

... (read more)
  1. I mostly agree with you.
  2. Thanks for the information about rationalist YouTube channels and the rest. I have updated.
  3. In fact, even if someone is already doing this, I wrote this article to say "it is too early to capitulate." Even if our chances of surviving are small, we should do something, not write articles like MIRI announces new "Death With Dignity" strategy and accept defeat. Because if you accept defeat, you will do nothing afterwards and you won't increase our chances of surviving (and you would if you didn't accept it).

Basically, the answer to "why aren't peo

... (read more)

That's why I was so impressed to see cousin_it propose what I think is an even better solution on the Less Wrong thread on the matter:

Or you can write your opponent a cheque for half of the winnings in exchange for him cooperating while you defect. It then won't make sense for him to defect.
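
A minimal sketch of the payoff arithmetic, assuming a split/steal-style game with a hypothetical pot of 100 and a cheque that only pays out if I actually win the pot:

```python
# Hypothetical pot; the cheque is worth half the winnings, so it is worthless
# unless I actually take the pot.
POT = 100

def payoffs(i_defect: bool, opponent_defects: bool, cheque: float):
    if i_defect and opponent_defects:
        return 0, 0                      # both steal: nobody wins anything
    if i_defect and not opponent_defects:
        return POT - cheque, cheque      # I take the pot and honour the cheque
    if not i_defect and opponent_defects:
        return 0, POT
    return POT / 2, POT / 2              # both split

# With the cheque set to half the pot, the opponent gets 50 by cooperating
# and 0 by defecting, so cooperating is his better move:
print(payoffs(True, False, POT / 2))     # (50.0, 50.0)
print(payoffs(True, True, POT / 2))      # (0, 0)
```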

I understand. My question is: can I publish an article about this so that only the MIRI guys can read it, or send Eliezer an e-mail, or something?

2Tapatakt
Gretta Duleba is MIRI's Communication Manager. I think she is the person you should ask about whom to write to.

I realized something important about psychology that is not yet publicly available, or that is very little known compared to its importance (60%). I don't want to publish it as a regular post, because it may greatly help the development of AGI (40% that it helps, 15% that it helps greatly), and I would like to help only those who are trying to create an aligned AGI. What should I do?

1Tapatakt
Everyone who is trying to create AGI is trying to create aligned AGI. But they think it will be easy (in the sense of "not so super hard that they will probably fail and create a misaligned one"); otherwise they wouldn't try in the first place. So, I think, you should not share your info with them.

For a joke to be funny, you need a "wow effect" where the reader quickly connects a few pieces of evidence. But go on! I'm sure you can do it!

This is a good philosophical exercise: can you define "humor" well enough to make a good joke?

The probability of the existence of the whole universe is much lower than that of a single brain, so most likely we are Eliezer's dream.

Guessing the Teacher's Password: Eliezer?

To model the actions of the evil genius in the book, Eliezer imagines that he is evil.

I realized something important about psychology that is not yet publicly available, or that is very little known compared to its importance (60%). I don't want to publish it as a regular post, because it may greatly help the development of AGI (40% that it helps, 15% that it helps greatly), and I would like to help only those who are trying to create an aligned AGI. What should I do?

2Ruby
I'd ask in the Open Thread rather than here. I don't know of a canonical answer but would be good if someone wrote one.

Moloch’s Army

We worship what brings success. Therefore, crime bosses worship power, philosophers from LessWrong worship intelligence, and middle managers worship Moloch. And just as we are ready to be curious, even when it is not optimal in a particular case, and to persuade others to be curious, middle managers will spread the "cult of Moloch". It's the same psychological mechanism.

The fascist project was an attempt to turn national politics into a maze. Fascism consists of creating the most competitive state possible, ensuring that the individual parts of the nation do not fight each other and drain resources that could be directed at fighting other nations. That is literally all there is to fascism. And indeed, at first the fascist states, which directed most of their economy toward the army (competitiveness), won, and began to form alliances with each other in order to fight together against those who did not worship Moloch. Ma... (read more)

Professional sport is a maze in the sense that the competition there is enormous, and if you want to reach the professional level, you will have to sacrifice whatever health and personal time it takes.

Let me reformulate this essay in one paragraph:

Glomarization is good, but sometimes we can't use it because others don't understand the principle of Glomarization, or because you have too many counterfactual selves, and some of them won't like just telling the truth. Therefore, when you are asked about Jews in the attic, it is acceptable to lie, but when you are asked if you would lie about Jews in the attic, you must ALWAYS tell the truth. So meta honesty is just a way to use glomarization as often as you want.

So it shouldn’t be surprising if acting like you have more status than I assign to you triggers a negative emotion, a slapdown response.

I think there's a different mechanism here. I don't like it when Mr. A can't do X but doesn't know it, publicly announces that he's going to do X, and gets a lot of prestige upfront. At the same time, I understand that he will not succeed and should not get the prestige. And then, when A fails, it makes me feel worse about anyone who claims they can do X without having any experience.

Imagine that some philo... (read more)

Sometimes, maybe you don't have time for friends to let you know. You're living an hour away from a wildfire that's spreading fast. And the difference between escaping alive and asphyxiating is having trained to notice and act on the small note of discord as the thoughts flicker by:

"Huh, weird."

Our civilization lives an hour away from a dozen metaphorical fires, some of which no living person has seriously thought about

We have a lot of people showing up, saying "I want to help." And the problem is, the thing we most need help with is figuring out what to do. We need people with breadth and depth of understanding, who can look at the big picture and figure out what needs doing

Figure out how best to spread rationality, or at least ideas about X-risks. This is quite possible even with our resources at zero, but if we can spread these ideas to, for example, 20% of the population, it will greatly help us in the fight against X-risks. In addition, we will have more people who will help us... to think about what we should do, lol.

1) "I think we call this "taxes"."

So I invented taxes for charitable donations.

2) The second option is better for most participants, but not for everyone; you are right.

This is a very useful article that helped me understand many things about myself and society. Thanks!

Okay, I'll rewrite the post. Thanks for your answers.

That's true, but Program B will still be worse than a human-written program, so we aim to avoid spaghetti towers.

Spaghetti towers work especially poorly in changing environments: if evolution were reasonable, it would make us try to maximize the number of our genes in the next generation. But instead, it created several heuristics like hunger and the desire for groin friction. So when people came up with civilization, we started eating fast food and having sex with condoms.

People at simulacrum level 4 can praise their political allies.

I'm talking about regularly doing an audit of your whole life, desperately trying to find the most effective things. This technique is also about highlighting the potentially most effective actions that you didn't spend much time thinking about, but wrote off as "stupid" because, for example, they require getting out of your comfort zone.

Is that clearer?

1papetoast
That is much clearer; I think you should have said it out loud in the post.

I think that makes you what you pretend to be in order to protect yourself from Legilimency. That's why Harry fell into a coma - he pretended to be a stone.

1papetoast
This is like raw, n=1, personal feedback. No, not really. I read it twice but couldn't bring myself to care. It seems you are going into tangents and not actually talking directly about your technique. I could be wrong, but I also couldn't care enough to read into the sentences and understand what you're actually pointing at with all the words. Having a conclusion is nice because I jumped straight to that at first; it seems kind of too normal to justify the clickbait though. Overall I feel like I read some ramblings and didn't learn much.

I don't see the difference. The theory of relativity and Newton's theory also have different philosophies: Newton's theory states that gravity is a force, that the universe is constant and eternal, etc.

Newton's theory is not exactly a special case of the theory of relativity, because it is less accurate.

Edit: I have received a lot of disagreement votes on my comment. Can you explain why you disagree?

Jeff Bezos could announce that he will pay 5,000,000,000 to whoever invents a cure for cancer. Or, rather, give out a monetary reward for every step towards curing cancer. Then, if you have an idea for how to cure such-and-such a type of cancer, you take out a loan at a high interest rate (because it is risky) and conduct the research.
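
A rough sketch of the loan logic, with made-up numbers for the success probability, loan size, and interest rate (only the 5,000,000,000 prize comes from the comment above):

```python
# A researcher should take the loan only if the expected prize money exceeds
# what must be repaid with interest. All figures except the prize are hypothetical.
prize = 5_000_000_000        # announced reward for the cure (from the comment)
p_success = 0.01             # hypothetical chance the research idea works
loan = 20_000_000            # hypothetical research budget borrowed up front
interest_rate = 0.30         # high rate, because the lender bears the risk

expected_prize = p_success * prize
repayment = loan * (1 + interest_rate)

print(expected_prize, repayment, expected_prize > repayment)
# 50,000,000 vs 26,000,000: with these numbers the risky loan is worth taking
```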

He could form a fund that determines which research brings us closer to a cancer cure; after all, Nobel Prizes work well.

Before reading this sequence, I had an intuitive sense of production "bottlenecks", but the sequence allowed me to understand them much better. Thank you!

""I think America has better values than Pakistan does, but that doesn’t mean I want us invading them, let alone razing their culture to the ground and replacing it with our own" - why not? No, seriously. America invaded several Muslim (fundamentalist Muslim, not we-kinda-like-Quran-stop-accusing-us-of-ISIS Muslim) countries already anyway. Why not raze the fundamentalist culture to the ground and replace it with universal?"

Preserving their culture is part of their utility function. Destroying their culture just like that is not ethical for the same reason... (read more)

It seems to me that the people behind the "naive graph" mean by intelligence "the ability to achieve goals", while the "Eliezer graph" means by intelligence total computing power or something like that. Thus, the function that takes a value from the "Eliezer graph" and returns where that point stands on the "naive graph" is hyperexponential.

I mean, we're simplifying reality down to Bayesian networks and scenario trees. And it works. It seems that we can say that the universe is Bayesian.

7TAG
Bayesianism works up to a point. Frequentism works up to a point. Various other things work. You haven't shown that frequentism doesn't work, or that frequentism and bayesianism are mutually exclusive.

What exactly do users lose and receive karma for?

2habryka
Karma is just the sum of votes from other users on your posts, comments and wiki-edit contributions.