All of Silver_Swift's Comments + Replies

It looks like there's a good chance that it's going to rain tomorrow, so we will gather at the train station and decide, based on the weather and the number of people that show up, whether to go with the original plan or just go grab some drinks in the city center.

We'll probably wait for about half an hour. If you are planning on coming and can't make it at 15:30, please let me know so we can wait for you/let you know where we are going.

If the thing you're making exists and is this cheap, then why is Pharma leaving the money on the floor and not mass producing this?

There are a number of costs that Moderna/Pfizer/AstraZeneca incur that a homebrew vaccine does not. Off the top of my head:

1. Salaries for the (presumably highly educated) lab techs that put this stuff together. I don't know johnswentworth's background, but presumably he wouldn't exactly be asking minimum wage if he was doing this commercially.

2. Costs of running large scale trials and going through all the paperwork to get FDA approv... (read more)

4Bucky
Trying to quantify these effects:

1. Salaries can't add much, especially if you're looking at mass producing. If you're creating 500 vaccines then maybe it takes a couple of hours? Say $20/hour (looking at local job listings for this kind of role), we get 8c/dose on salary. As you scale, this is only going to go down.

2. It seems like vaccine trials can be done for a few hundred million, although there is a big variation and I'm not completely sure whether the numbers given there include some manufacturing build up. If a large pharma company is going to be making lots of vaccine, it seems like they should be able to achieve that for less than $1/dose.

3a. Taxes may add a decent few percent but can't be a main driver of cost.

3b. Shipping costs for refrigerated goods are maybe 5c per 1000 miles per kg. That data is from a while back (1988!) and costs might be a bit higher for colder temperatures, but I can't see this being a large fraction of the cost.

4a. For liability, I note that at least AstraZeneca have struck deals in most countries to be exempt from such liabilities. It seems that in the US all COVID vaccines will benefit from this.

4b. Some companies (at least AstraZeneca and Johnson & Johnson) have said that they will be selling their COVID vaccines at cost. Even lacking this, I wouldn't expect corporate profits to be huge, even just from a PR point of view.

Additionally:

5. Risk of failed vaccine trials. If you only expect to have a 1 in 3 chance of a successful stage 3 trial, then the $1/dose from 2 becomes $3/dose to expect to break even. I'm not sure whether this risk is covered by governments - I think it was to some extent but am not confident.

Given Dentin's comment that the material cost is something like 10c/dose (which makes sense given how little it cost to double John's peptides order), I think most of the cost looks like it is in the trials and the risk of failure thereof, but this isn't enough to explain why companies aren't doing this. It's prob
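For readers who want the arithmetic in one place, here is a rough tally of the per-dose figures quoted above (a sketch only; every number is an assumption taken from that comment, and the risk adjustment applies just to the trial cost, as in point 5):

```python
# Rough per-dose tally of the figures quoted above (all assumptions, not data).
labour = 0.08            # ~2 hours at ~$20/hour spread over ~500 doses
trials = 1.00            # "a few hundred million" amortised over a large production run
shipping = 0.05          # refrigerated shipping estimate, per ~1000 miles per kg
materials = 0.10         # Dentin's ~10c/dose materials figure
p_phase3_success = 1 / 3 # assumed chance a candidate survives stage 3 trials

base_cost = labour + trials + shipping + materials
# Only the trial spend is at risk of being wasted on a failed candidate,
# so divide it by the success probability to get an expected break-even price.
risk_adjusted = (base_cost - trials) + trials / p_phase3_success

print(f"base cost ~ ${base_cost:.2f}/dose, risk-adjusted ~ ${risk_adjusted:.2f}/dose")
# base cost ~ $1.23/dose, risk-adjusted ~ $3.23/dose
```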

Would also prefer fewer twitter links.

You're not limited to one simulacrum level per unit of information. What you're describing is just combining level 1 (reasonable intervention) and level 2 (influencing others to wear a mask).

I honestly don't understand what that thing is, actually.

This was also my first response when reading the article, but on second glance I don't think that is entirely fair. The argument I want to convey with "Everything is chemicals!" is something along the lines of "The concept that you use the word chemicals for is ill-defined and possibly incoherent and I suspect that the negative connotations you associate with it are largely undeserved.", but that is not what I'm actually communicating.

Suppose I successfully convinc... (read more)

There isn't an obvious question that, if we could just ask an Oracle AI, the world would be saved.

"How do I create a safe AGI?"

Edit: Or, more likely, "this is my design for an AGI, (how) will running this AGI result in situations that I would be horrified by if they occur?"

1Matthew Barnett
You won't be horrified if you're dead. More seriously though, if we got an Oracle AI that understood the intended meaning of our questions and did not lie or deceive us in any way, that would be an AI-alignment-complete problem -- in other words, just as hard as creating friendly AI in the first place.

I don't think it is realistic to aim for no relevant knowledge getting lost even if your company loses half of its employees in one day. A bus factor of five is already shockingly competent when compared to any company I have ever worked for, going for a bus factor of 658 is just madness.

One criticism: why bring up Republicans? I'm not even a Republican and I sort of recoiled at that part.

Agreed. Also not a Republican (or American, for that matter), but that was a bit off-putting. To quote Eliezer himself:

In Artificial Intelligence, and particularly in the domain of nonmonotonic reasoning, there's a standard problem:  "All Quakers are pacifists.  All Republicans are not pacifists.  Nixon is a Quaker and a Republican.  Is Nixon a pacifist?"
What on Earth was the point of choosing this as an example?  To rouse the po
... (read more)
Viliam130

Yeah, I was thinking about exactly the same quote. Is this what living in the Bay Area for too long does to people?

How about using an example of a Democrat who insists that logic is colonialistic and oppressive; Aumann's agreement theorem is wrong because Aumann was a white male; and the AI should never consider itself smarter than an average human, because doing so would be sexist and racist (and obviously also islamophobic if the AI concludes that there are no gods). What arguments could Eliezer give to zir? For bonus points, consider that any part of the reply would be immediately taken out of context and shared on Twitter.

Okay, I'll stop here.

For the record, otherwise this is a great article!

Funding this Journal of High Standards wouldn't be a cheap project

So where is the money going to come from? You're talking about seeing this as a type of grant, but the amount of money available for grants and XPrize type organizations is finite and heavily competed for. How are you going to convince people that this is a better way of making scientific progress than the countless other options available?

3ChristianKl
The main point of the post is floating the idea. Discussing the idea helps different people to estimate its value. If people find the idea worthwhile it can spread, and maybe someone who does have the kind of money for it decides to fund it.

> If you only get points for beating consensus predictions, then matching them will get you a 0.

Important note on this: matching them guarantees a 0; implementing your own strategy and doing worse than the consensus could easily get you negative marks.
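To make the asymmetry concrete, here is a minimal sketch under one possible scoring rule (a relative log score; the comment above doesn't specify a rule, so this and the numbers in it are purely illustrative):

```python
import math

def relative_log_score(p_yours: float, p_consensus: float, outcome: bool) -> float:
    """Your log score minus the consensus forecast's log score for one question."""
    if not outcome:  # score the probability each forecast put on what actually happened
        p_yours, p_consensus = 1 - p_yours, 1 - p_consensus
    return math.log(p_yours) - math.log(p_consensus)

# Consensus says 70% chance of rain, and it does rain.
print(relative_log_score(0.70, 0.70, True))  # 0.0: matching the consensus guarantees a zero
print(relative_log_score(0.40, 0.70, True))  # ~ -0.56: deviating and doing worse goes negative
print(relative_log_score(0.90, 0.70, True))  # ~ +0.25: only a better-than-consensus call gains points
```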

Also, teaching quality will be much worse if teachers are different people than those actually doing the work. A teacher who works with what he is teaching gets hours of feedback every day on what works and what does not; a teacher who only teaches has no similar mechanism, so he will provide much less value to his students.

No objection to the rest of your post, but I'm with Eliezer on this. Teaching is a skill that is entirely separate from whatever subject you are teaching and this skill also strongly influences the amount of value a teacher can ... (read more)

I read the source before reading the quote and was expecting a quote from The Flash.

Correct, but it is a kind of fraud that is hard to detect and easy to justify to oneself as being "for the greater good" so the scammer is hoping that you won't care.

Rationality isn't just about being skeptical, though, and there is something to be said for giving people the benefit of the doubt and engaging with them if they are willing to do so in an open manner. There are obviously limits to the extent to which you want to do so, but so far this thread has been an interesting read, so I wouldn't worry too much about us wasting our time.

It might not be easy to figure out good signals that can't be replicated by scammers, though. More importantly, and what I think MarsColony_in10years is getting at, even if you can find hard-to-copy signals they are unlikely to be without costs of their own, and it is unfortunate that scammers are forcing these costs on legitimate charities.

That depends entirely on your definition (which is the point of the quote I guess), I've heard people use it both ways.

Well, we're working on it, ok ;)

We obviously haven't left nature behind entirely (whatever that would mean), but we have at least escaped the situation Brady describes, where we are spending most of our time and energy searching for our next meal while preventing ourselves from becoming the next meal for something else.

Life for the average human in first world countries is definitely no longer only about eating and not dying.

3[anonymous]
Excuse me, my life is only about eating and not dying. ;)

Context: Brady is talking about a safari he took and the life the animals he saw were leading.

Brady: It really was very base, everything was about eating and not dying, pretty amazing.

Grey: Yeah, that is exactly what nature is, that's why we left.

-- Hello Internet (link, animated)

Might be more anti-naturalist than strictly rationalist, but I think it still qualifies.

2Lumifer
I think he's mistaken in believing we left :-/

You are absolutely correct, they wouldn't be able to detect fluctuations in processing speed (unless those fluctuations had an influence on, for instance, the rounding errors in floating point values).

About update 1: It knows our world very likely has something approximating Newtonian mechanics; that is a lot of information by itself. But more than that, it knows that the real universe is capable of producing intelligent beings that chose this particular world to simulate. From a strictly theoretical point of view that is a crapton of information, I don't... (read more)

0ZoltanBerrigomo
Good point -- this undermines a lot of what I wrote in my update 1. For example, I have no idea if F = m d^3x/dt^3 would result in a world that is capable of producing intelligent beings. I should at some point produce a version of the above post with this claim, and other questionable parenthetical remarks I made, deleted, or at least acknowledge that they require further argumentation; they are not necessary for the larger point, which is that as long as the only thing the superintelligence can do (by definition) is live in a simulated world governed by Newton's laws, and as long as we don't interact with it at all except to see an automatically verified answer to a preset question (e.g., factor "111000232342342"), there is nothing it can do to harm us.

Yeah, that didn't come out as clear as it was in my head. If you have access to a large number of suitable less intelligent entities, there is no reason you couldn't combine them into a single, more intelligent entity. The problem I see is about the computational resources required to do so. Some back of the envelope math:

I vaguely remember reading that with current supercomputers we can simulate a cat brain at 1% speed, even if this isn't accurate (anymore) it's probably still a good enough place to start. You mention running the simulation for a million y... (read more)

0ZoltanBerrigomo
Here is my attempt at a calculation. Disclaimer: this is based on googling. If you are actually knowledgeable in the subject, please step in and set me right.

* There are 10^11 neurons in the human brain.
* A neuron will fire about 200 times per second.
* It should take a constant number of flops to decide whether a neuron will fire -- say 10 flops (no need to solve a differential equation, neural networks usually use some discrete heuristics for something like this).
* I want a society of 10^6 orcs running for 10^6 years.
* As you suggest, let's let the simulation run for a year of real time (moving away at this point from my initial suggestion of 1 second).

By my calculations, it seems that in order for this to happen we need a computer that does 2x10^25 flops per second. According to http://www.datacenterknowledge.com/archives/2015/04/15/doe-taps-intel-cray-to-build-worlds-fastest-supercomputer/ in 2018 we will have a supercomputer that does about 2x10^17 flops per second. That means we need a computer that is one hundred million times faster than the best computer in 2018. That is still quite a lot, of course. If Moore's law were ongoing, this would take ~40 years; but Moore's law is dying. Still, it is not outside the realm of possibility for, say, the next 100 years.

Edit: By the way, one does not need to literally implement what I suggested -- the scheme I suggested is in principle applicable whenever you have a superintelligence, regardless of how it was designed. Indeed, if we somehow develop an above-human intelligence, rather than trying to make sure its goals are aligned with ours, we might instead let it loose within a simulated world, giving it a preference for continued survival. Just one superintelligence thinking about factoring for a few thousand simulated years would likely be enough to let us factor any number we want. We could even give it in-simulation ways of modifying its own code.
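A quick sketch of that back-of-the-envelope arithmetic in Python, with every input taken from the assumptions listed in the comment above (none of these figures are independently sourced):

```python
# All inputs are the commenter's assumptions, not measured values.
neurons_per_brain = 1e11   # neurons in a human brain
firing_rate_hz = 200       # firings per neuron per second
flops_per_decision = 10    # flops to decide whether a given neuron fires
num_orcs = 1e6             # size of the simulated society
simulated_years = 1e6      # in-simulation time to be covered
real_years = 1             # wall-clock budget for the run

flops_per_simulated_second = (neurons_per_brain * firing_rate_hz
                              * flops_per_decision * num_orcs)
speedup = simulated_years / real_years
required = flops_per_simulated_second * speedup

print(f"{required:.1e} flop/s")  # 2.0e+26 with these inputs, within a factor of
                                 # ten of the 2x10^25 figure quoted above
```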
0ZoltanBerrigomo
I think this calculation is too conservative. The reason is (as I understand it) that neurons are governed by various differential equations, and simulating them accurately is a pain in the ass. We should instead assume that deciding whether a neuron will fire will take a constant number of flops. I'll write another comment which attempts to redo your calculation with different assumptions. But will we have figured out a way to reap the gains of AI safely for humanity?
0ChristianKl
The key question is what you consider to be a "simulation". The predictions such a model makes are far from the way a real cat brain works.

To be fair, all interactions described happen after the AI has been terminated, which does put up an additional barrier for the AI to get out of the box. It would have to convince you to restart it without being able to react to your responses (apart from those it could predict in advance) and then it still has to convince you to let it out of the box.

Obviously, putting up additional barriers isn't the way to go and this particular barrier is not as impenetrable for the AI as it might seem to a human, but still, it couldn't hurt.

First off, I'm a bit skeptical about whether you can actually create a superintelligent AI by combining sped-up humans like that. I don't think that is the core of your argument, though, so let's assume that you can and that the resultant society is effectively a superintelligence now.

The problem with superintelligences is that they are smarter than you. It will realize that it is in a box and that you are going to turn it off eventually. Given that this society is based on natural selection it will want to prevent that. How will it accomplish that? I don... (read more)

0ZoltanBerrigomo
Why not? You are pretty smart, and all you are is a combination of 10^11 or so very "dumb" neurons. Now imagine a "being" which is actually a very large number of human-level intelligences, all interacting...
1ZoltanBerrigomo
I wrote a short update to the post which tries to answer this point. I believe they should have no ability whatsoever to detect fluctuations in the speed of the simulation. Consider how the world of World of Warcraft appears to an orc inside the game. Can it tell the speed at which the hardware is running the game? It can't. What it can do is compare the speed of different things: how fast does an apple fall from a tree vs how fast a bird flies across the sky. The orc's inner perception of the flow of time is based on comparing these things (e.g., how fast does an apple fall) to how fast their simulated brains process information. If everything is slowed down by a factor of 2 (so you, as a player, see everything twice as slow), nothing appears any different to a simulated being within the simulation.

I think you're misunderstanding me. I'm saying that there are problems where the right action is to mark it "unsolvable, because of X" and then move on. (Here, it's "unsolvable because of unbounded solution space in the increasing direction," which is true in both the "pick a big number" and "open boundary at 100" case.)

But if we view this as an actual (albeit unrealistic/highly theoretical) situation rather than a math problem we are still stuck with the question of which action to take. A perfectly rational agen... (read more)

6Richard_Kennaway
There is no such thing as an actual unrealistic situation. They do not have to pick a number, because the situation is not real. To say "but suppose it was" is only to repeat the original hypothetical question that the agent has declared unsolved. If we stipulate that the agent is so logically omniscient as to never need to abandon a problem as unsolved, that does not tell us, who are not omniscient, what that hypothetical agent's hypothetical choice in that hypothetical situation would be. The whole problem seems to me on a level with "can God make a weight so heavy he can't lift it?"

That's fair, I tried to formulate a better definition but couldn't immediately come up with anything that sidesteps the issue (without explicitly mentioning this class of problems).

When I taboo perfect rationality and instead just ask what the correct course of action is, I have to agree that I don't have an answer. Intuitive answers to questions like "What would I do if I actually found myself in this situation?" and "What would the average intelligent person do?" are unsatisfying because they seem to rely on implicit costs to computa... (read more)

That is no reason to fear change, "not every change is an improvement but every improvement is a change" and all that.

0Glen
That depends on the situation and record, doesn't it? If 90% of changes that you have undergone in the past were negative, then wouldn't it be reasonable to resist change in the future? Obviously you shouldn't just outright refuse all change, but if you have a chance to slow it down long enough to better judge what the effects will be, isn't that good? I guess the real solution is to judge possible actions by analyzing the cost/benefit to the best of your ability in cases where this is practical.

I see I made Bob unnecessarily complicated. Bob = 99.9 repeating (sorry, I don't know how to get a vinculum over the .9). This is a number. It exists.

It is a number; it is also known as 100, which we are explicitly not allowed to pick (0.99 repeating = 1, so 99.99 repeating = 100).
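For anyone who wants the standard argument behind that parenthetical, a one-line derivation (nothing here is specific to this thread):

$$x = 99.\overline{9} \;\Rightarrow\; 10x = 999.\overline{9} \;\Rightarrow\; 10x - x = 900 \;\Rightarrow\; 9x = 900 \;\Rightarrow\; x = 100$$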

In any case, I think casebash successfully specified a problem that doesn't have any optimal solutions (which is definitely interesting), but I don't think that is a problem for perfect rationality any more than problems that have more than one optimal solution are a problem for perfect rationality.

1Usul
I was born a non-Archimedean and I'll die a non-Archimedean. "0.99 repeating = 1" I only accept that kind of talk from people with the gumption to admit that the quotient of any number divided by zero is infinity. And I've got college calculus and 25 years of not doing much mathematical thinking since then to back me up. I'll show myself out.
1casebash
I'm kind of defining perfect rationality as the ability to maximise utility (more or less). If there are multiple optimal solutions, then picking any one maximises utility. If there is no optimal solution, then picking none maximises utility. So this is problematic for perfect rationality as defined as utility maximisation, but if you disagree with the definition, we can just taboo "perfect rationality" and talk about utility maximisation instead. In either case, this is something people often assume exists without even realising that they are making an assumption.

I don't typically read a lot of sci-fi, but I did recently read Perfect State, by Brandon Sanderson (because I basically devour everything that guy writes) and I was wondering how it stacks up to typical post-singularity stories.

Has anyone here read it? If so, what did you think of the world that was presented there, would this be a good outcome of a singularity?

For people that haven't read it, I would recommend it only if you are either a sci-fi fan that wants to try something by Brandon Sanderson or if you have read some cosmere novels and would like a story that touches on some slightly more complex (and more LWish) themes than usual (and don't mind it being a bit darker than usual).

Similarly:

I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive.

Randall Munroe

627chaos
Maybe hubris means not knowing the capabilities of one's tools. Edit: I've just realized that in that sense, underestimating the capabilities of one's tools and refusing to try would also be a sin. If you believe that Fate itself is opposed to any attempt by men to fly, that's more arrogant a belief than thinking Fate is indifferent. I like this implication.

Ok, fair enough. I still hold that Sansa was more rational than Theon at this point, but that error is one that is definitely worth correcting.

Why is this a rationality quote? I mean, sure, it is technically true (for any situation you'll find yourself in), but that really shouldn't stop us from trying to improve the situation. Theon has basically given up all hope and is advocating compliance with a psychopath for fear of what he may do to you otherwise, which doesn't sound particularly rational to me.

2CCC
"When you make plans to stop something bad, make sure that you also make plans to ensure that it is not replaced by something worse - since there is always something worse that exists". That's what I get from it, anyhow.
3James_Miller
It corrects an error people sometimes make when in a bad situation of assuming things can't get worse, so any change can't be for the worse. Sansa had not been tortured by the psychopath in question while Theon had, so Theon better understood the price of defiance.

That is an issue with revealed preferences, not an indication of adamzerner's preference order. Unless you are extraordinarily selfless, you are never going to accept a deal of the form "I give you n dollars in exchange for me killing you" regardless of n, therefore the financial value of your own life is almost always infinite*.

*: This does not mean that you put infinite utility on being alive, btw, just that the utility of money caps out at some value that is typically smaller than the value of being alive (and that cap is lowered dramatically if you are not around to spend the money).
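One compact way to state the footnote's point (my formalization, not something from the original exchange), writing $w_0$ for current wealth:

$$\sup_{n}\, U(\text{dead},\ n) \;<\; U(\text{alive},\ w_0) \;<\; \infty \;\;\Rightarrow\;\; \forall n:\ U(\text{dead},\ n) < U(\text{alive},\ w_0)$$

So the "dollar price of your life" can come out infinite (no finite n closes the gap) even though every utility involved is finite.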

1Unknowns
I think you are mistaken. If you would sacrifice your life to save the world, there is some amount of money that you would accept for being killed (given that you could at the same time determine the use of the money; without this stipulation you cannot meaningfully be said to be given it).

Fair enough, let me try to rephrase that without using the word friendliness:

We're trying to make a superintelligent AI that answers all of our questions accurately but does not otherwise influence the world and has no ulterior motives beyond correctly answering questions that we ask of it.

If we instead accidentally made an AI that decides that it is acceptable to (for instance) manipulate us into asking simpler questions so that it can answer more of them, it is preferable that it doesn't believe anyone is listening to the answers it gives, because that is... (read more)

0Lumifer
I don't think so. As I mentioned in another subthread here, I consider separating what an AI believes (e.g. that no one is listening) from what it actually does (e.g. answer questions) to be a bad idea.

False positives are vastly better than false negatives when testing for friendliness though. In the case of an oracle AI, friendliness includes a desire to answer questions truthfully regardless of the consequences to the outside world.

1Lumifer
Which definition of Friendliness are you referring to? I have a feeling you're treating Friendliness as a sack into which you throw whatever you need at the moment...

Ah yes, that did it (and I think I have seen the line drawing before) but it still takes a serious conscious effort to see the old woman in either of those. Maybe some Freudian thing where my mind prefers looking at young girls over old women :P

For me, the pictures in the OP stop being a man at around panel 6; going back, they stop being a woman at around 4. I can flip your second example by unfocusing and refocusing my eyes, but in your first example I can't for the life of me see anything other than a young woman looking away from the camera (I'm assuming there is an old woman in there somewhere based on the image name).

Could you give a hint as to how to flip it? I'm assuming the ear turns into an eye or something, but I've been trying for about half an hour now and it is annoying the crap out of me.

4arundelo
* The young woman's ear is the old woman's left eye.
* The young woman's chin is the old woman's nose.
* The young woman's choker necklace is the old woman's mouth.

The old woman is looking down. A line drawing version might be easier.

(e.g. if accuracy is defined in terms of the reaction of people that read its output).

I'm mostly ignorant about AI design beyond what I picked up on this site, but could you explain why you would define accuracy in terms of how people react to the answers? There doesn't seem to be an obvious difference between how I react to information that is true or (unbeknownst to me) false. Is it just for training questions?

2Stuart_Armstrong
It might happen. "accuracy" could involve the AI answering with the positions of trillions of atoms, which is not human parsable. So someone might code "human parsable" as "a human confirms the message is parsable".

I'm not sure how much I agree with the whole "punishing correct behavior to avoid encouraging it" (how does the saintly person know that this is the right thing for him to do if it is wrong for others to follow his example), but I think the general point about tracking whose utility (or lives in this case) you are sacrificing is a good one.

0[anonymous]
No, my point is that the decision is correct, but believing we are allowed to make such decisions is less correct in general, and rules that allow them are suboptimal. E.g. we can believe putting violent criminals into prison is correct, and we can simultaneously believe only the criminal justice system should be allowed to do this and not every person feeling entitled to build a prison in their basement and imprisoning anyone they judge to be violent.

Mild fear here; I can talk in groups of people just fine, but I get nervous before and during a presentation (something I have taken deliberate steps to get better at).

For me at least, the primary thing that helps is being comfortable with the subject matter. If I feel like I know what I'm talking about and I practiced what I am going to say it usually goes fine (it took some effort to get to this level, btw), but if I feel like I have to bluff my way through everything falls apart real fast. The number of people in the audience and how well I k... (read more)

Basically the ends don't justify the means (Among Humans). We are nowhere near smart enough to think those kinds of decisions (or any decisions really) through past all their consequences (and neither is Elon Musk).

It is possible that Musk is right and (in this specific case) it really is a net benefit to mankind not to take one minute to phrase something in a way that is less hurtful, but in the history of mankind I would expect that the vast majority of people who believed this were actually just assholes trying to justify their behavior. And beside... (read more)

I'm still sad that there isn't a dictionary of numbers for Firefox, it sounds amazing but it isn't enough to make me switch to Chrome just for that.

I stand corrected, thank you.

I prefer the English translation, it's more direct, though it does lack the bit about avoiding your own mistakes.

A more literal translation for those that don't speak German:

Those that attempt to learn from their mistakes are idiots. I always try to learn from the mistakes of others and avoid making any myself.

Note: I'm not a German speaker, what I know of the language is from three years of high school classes taken over a decade ago, but I think this translation is more or less correct.

2ChristianKl
It's not exactly the quote. Bismarck doesn't speak about people who attempt to learn, but about people who believe they learn.

Moreover (according to a five-minute Wikipedia search), not all doctors swear the same oath, but the modern version of the Hippocratic oath does not have an explicit "Thou shalt not kill" provision and in fact, it doesn't even include the commonly quoted "First, do no harm".

Obviously taking a person's life, even with his/her consent, may violate the personal ethics of some people, but if that is the problem the obvious solution is to find a different doctor.

Is this the place to ask technical questions about how the site works? If so, then I'm wondering why I can't find any of the rationality quote threads on the main discussion page anymore (I thought we'd just stopped doing those, until I saw it pop up in the side bar just now). If not, then I think I just asked anyway. :P

9NancyLebovitz
This is a good place to ask about how the site works.
8gjm
Here -- it's in Main rather than Discussion.

"You say that every man thinks himself to be on the good side, that every man who opposed you was deluding himself. Did you ever stop to consider that maybe you were the one on the wrong side?"

-- Vasher (from Warbreaker) explaining how that particular algorithm looks from the inside.

To add my own highly anecdotal evidence: my experience is that most people with a background in computer science or physics have no active model of how consciousness maps to brains, but when prodded they indeed usually come up with some form of functionalism*.

My own position is that I'm highly confused by consciousness in general, but I'm leaning slightly towards substance dualism, I have a background in computer science.

*: Though note that quite a few of these people simultaneously believe that it is fundamentally impossible to do accurate natural language parsing with a Turing machine, so their position might not be completely thought through.

6dxu
This seems a bit like trying to fix a problem by applying a patch that causes a lot more problems. The stunning success of naturalistic explanations so far in predicting the universe (plus Occam's Razor) alone would be enough to convince me that consciousness is a naturalistic process (and, in fact, they were what convinced me, plus a few other caveats). I'd assign maybe 95% probability to this conclusion. Still, I'd be interested in hearing what led you to your conclusion. Could you expand in more detail?

And conversely, some of the unusual-ness that can be attributed to IQ is only very indirectly caused by it. For instance, being able to work around some of the more common failure modes of the brain probably makes a significant portion of LessWrong more unusual than the average person and understanding most of the advice on this site requires at least some minimum level of mental processing power and ability to abstract.