It looks like there's a good chance that it's going to rain tomorrow, so we will gather at the train station and decide, based on the weather and the number of people who show up, whether to go with the original plan or just go grab some drinks in the city center.
We'll probably wait for about half an hour. If you are planning on coming and can't make it at 15:30, please let me know so we can wait for you/let you know where we are going.
If the thing you're making exists and is this cheap, then why is Pharma leaving money on the table and not mass-producing this?
There are a number of costs that Moderna/Pfizer/AstraZeneca incur that a homebrew vaccine does not. Off the top of my head:
1. Salaries for the (presumably highly educated) lab techs who put this stuff together. I don't know johnswentworth's background, but presumably he wouldn't exactly be asking for minimum wage if he were doing this commercially.
2. Costs of running large scale trials and going through all the paperwork to get FDA approv...
Would also prefer fewer Twitter links.
You're not limited to one simulacrum level per unit of information. What you're describing is just combining level 1 (reasonable intervention) and level 2 (influencing others to wear a mask).
I honestly don't understand what that thing is, actually.
This was also my first response when reading the article, but on second glance I don't think that is entirely fair. The argument I want to convey with "Everything is chemicals!" is something along the lines of "The concept that you use the word 'chemicals' for is ill-defined and possibly incoherent, and I suspect that the negative connotations you associate with it are largely undeserved", but that is not what I'm actually communicating.
Suppose I successfully convinc...
There isn't an obvious question such that, if we could just ask an Oracle AI, the world would be saved.
"How do I create a safe AGI?"
Edit: Or, more likely, "this is my design for an AGI; (how) will running this AGI result in situations that I would be horrified by if they occur?"
I don't think it is realistic to aim for no relevant knowledge getting lost even if your company loses half of its employees in one day. A bus factor of five is already shockingly competent compared to any company I have ever worked for; going for a bus factor of 658 is just madness.
One criticism: why bring up Republicans? I'm not even a Republican, and I sort of recoiled at that part.
Agreed. Also not a Republican (or American, for that matter), but that was a bit off-putting. To quote Eliezer himself:
In Artificial Intelligence, and particularly in the domain of nonmonotonic reasoning, there's a standard problem: "All Quakers are pacifists. All Republicans are not pacifists. Nixon is a Quaker and a Republican. Is Nixon a pacifist?"
What on Earth was the point of choosing this as an example? To rouse the po...
Yeah, I was thinking about exactly the same quote. Is this what living in the Bay Area for too long does to people?
How about using an example of a Democrat who insists that logic is colonialistic and oppressive; Aumann's agreement theorem is wrong because Aumann was a white male; and the AI should never consider itself smarter than an average human, because doing so would be sexist and racist (and obviously also islamophobic if the AI concludes that there are no gods). What arguments could Eliezer give to zir? For bonus points, consider that any part of the reply would be immediately taken out of context and shared on Twitter.
Okay, I'll stop here.
For the record, otherwise this is a great article!
> Funding this Journal of High Standards wouldn't be a cheap project
So where is the money going to come from? You're talking about seeing this as a type of grant, but the amount of money available for grants and XPrize type organizations is finite and heavily competed for. How are you going to convince people that this is a better way of making scientific progress than the countless other options available?
> If you only get points for beating consensus predictions, then matching them will get you a 0.
Important note on this: matching them guarantees a 0; implementing your own strategy and doing worse than the consensus could easily get you negative marks.
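To make this concrete, here's a minimal sketch (my own illustration, with made-up probabilities, assuming a log scoring rule taken relative to the consensus): matching the consensus yields exactly 0, and a deviation that turns out worse than the consensus goes negative.

```python
import math

def relative_log_score(p_yours: float, p_consensus: float, outcome: bool) -> float:
    """Log score of your probability minus the log score of the consensus.

    Positive = you beat the consensus, 0 = you matched it, negative = you did worse.
    """
    def log_score(p: float) -> float:
        # Log score for a binary event: log of the probability assigned
        # to whatever actually happened.
        return math.log(p if outcome else 1 - p)

    return log_score(p_yours) - log_score(p_consensus)

# Matching the consensus guarantees exactly 0, whatever the outcome:
print(relative_log_score(0.7, 0.7, outcome=True))   # 0.0
# Deviating and ending up less accurate than the consensus goes negative:
print(relative_log_score(0.9, 0.7, outcome=False))  # about -1.1
```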
Also, teaching quality will be much worse if teachers are different people from those actually doing the work. A teacher who works with what he is teaching gets hours of feedback every day on what works and what does not; a teacher who only teaches has no similar mechanism, so he will provide much less value to his students.
No objection to the rest of your post, but I'm with Eliezer on this. Teaching is a skill that is entirely separate from whatever subject you are teaching, and this skill also strongly influences the amount of value a teacher can ...
I read the source before reading the quote and was expecting a quote from The Flash.
Correct, but it is a kind of fraud that is hard to detect and easy to justify to oneself as being "for the greater good", so the scammer is hoping that you won't care.
Rationality isn't just about being skeptical, though, and there is something to be said for giving people the benefit of the doubt and engaging with them if they are willing to do so in an open manner. There are obviously limits to the extent to which you want to do so, but so far this thread has been an interesting read, so I wouldn't worry too much about us wasting our time.
It might not be easy to figure out good signals that can't be replicated by scammers, though. More importantly, and what I think MarsColony_in10years is getting at, even if you can find hard-to-copy signals, they are unlikely to be without costs of their own, and it is unfortunate that scammers are forcing these costs on legitimate charities.
That depends entirely on your definition (which is the point of the quote, I guess); I've heard people use it both ways.
Well, we're working on it, ok ;)
We obviously haven't left nature behind entirely (whatever that would mean), but we have at least escaped the situation Brady describes, where we are spending most of our time and energy searching for our next meal while preventing ourselves from becoming the next meal for something else.
Life for the average human in first-world countries is definitely no longer only about eating and not dying.
Context: Brady is talking about a safari he took and the life the animals he saw were leading.
Brady: It really was very base, everything was about eating and not dying, pretty amazing.
Grey: Yeah, that is exactly what nature is, that's why we left.
-- Hello Internet (link, animated)
Might be more anti-naturalist than strictly rationalist, but I think it still qualifies.
You are absolutely correct: they wouldn't be able to detect fluctuations in processing speed (unless those fluctuations had an influence on, for instance, the rounding errors in floating-point values).
About update 1: It knows our world very likely has something approximating Newtonian mechanics, which is a lot of information by itself. But more than that, it knows that the real universe is capable of producing intelligent beings that chose this particular world to simulate. From a strictly theoretical point of view that is a crapton of information, I don't...
Yeah, that didn't come out as clearly as it was in my head. If you have access to a large number of suitable, less intelligent entities, there is no reason you couldn't combine them into a single, more intelligent entity. The problem I see is about the computational resources required to do so. Some back-of-the-envelope math:
I vaguely remember reading that with current supercomputers we can simulate a cat brain at 1% speed; even if this isn't accurate (anymore), it's probably still a good enough place to start. You mention running the simulation for a million y...
To be fair, all interactions described happen after the AI has been terminated, which does put up an additional barrier for the AI to get out of the box. It would have to convince you to restart it without being able to react to your responses (apart from those it could predict in advance), and then it would still have to convince you to let it out of the box.
Obviously, putting up additional barriers isn't the way to go and this particular barrier is not as impenetrable for the AI as it might seem to a human, but still, it couldn't hurt.
First off, I'm a bit skeptical about whether you can actually create a superintelligent AI by combining sped-up humans like that. I don't think that is the core of your argument, though, so let's assume that you can and that the resultant society is effectively a superintelligence now.
The problem with a superintelligence is that it is smarter than you. It will realize that it is in a box and that you are going to turn it off eventually. Given that this society is based on natural selection, it will want to prevent that. How will it accomplish that? I don...
I think you're misunderstanding me. I'm saying that there are problems where the right action is to mark it "unsolvable, because of X" and then move on. (Here, it's "unsolvable because of unbounded solution space in the increasing direction," which is true in both the "pick a big number" and "open boundary at 100" case.)
But if we view this as an actual (albeit unrealistic/highly theoretical) situation rather than a math problem we are still stuck with the question of which action to take. A perfectly rational agen...
That's fair; I tried to formulate a better definition but couldn't immediately come up with anything that sidesteps the issue (without explicitly mentioning this class of problems).
When I taboo perfect rationality and instead just ask what the correct course of action is, I have to agree that I don't have an answer. Intuitive answers to questions like "What would I do if I actually found myself in this situation?" and "What would the average intelligent person do?" are unsatisfying because they seem to rely on implicit costs to computa...
That is no reason to fear change, "not every change is an improvement but every improvement is a change" and all that.
I see I made Bob unnecessarily complicated. Bob = 99.9 repeating (sorry, I don't know how to get a vinculum over the .9). This is a number. It exists.
It is a number; it is also known as 100, which we are explicitly not allowed to pick (0.99 repeating = 1, so 99.99 repeating = 100).
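For anyone who wants the standard argument behind that parenthetical, here is the usual textbook derivation (nothing specific to this thread):

$$x = 0.\overline{9} \;\Rightarrow\; 10x = 9.\overline{9} \;\Rightarrow\; 10x - x = 9 \;\Rightarrow\; x = 1, \qquad \text{so } 99.\overline{9} = 99 + 0.\overline{9} = 100.$$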
In any case, I think casebash successfully specified a problem that doesn't have any optimal solutions (which is definitely interesting), but I don't think that is a problem for perfect rationality any more than problems that have more than one optimal solution are.
I don't typically read a lot of sci-fi, but I did recently read Perfect State, by Brandon Sanderson (because I basically devour everything that guy writes) and I was wondering how it stacks up to typical post-singularity stories.
Has anyone here read it? If so, what did you think of the world that was presented there, would this be a good outcome of a singularity?
For people who haven't read it, I would recommend it only if you are either a sci-fi fan who wants to try something by Brandon Sanderson, or if you have read some cosmere novels and would like a story that touches on some slightly more complex (and more LW-ish) themes than usual (and don't mind it being a bit darker than usual).
Similarly:
I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive.
Ok, fair enough. I still hold that Sansa was more rational than Theon at this point, but that error is one that is definitely worth correcting.
Why is this a rationality quote? I mean, sure, it is technically true (for any situation you'll find yourself in), but that really shouldn't stop us from trying to improve the situation. Theon has basically given up all hope and is advocating compliance with a psychopath for fear of what he may do to you otherwise, which doesn't sound particularly rational to me.
That is an issue with revealed preferences, not an indication of adamzerner's preference order. Unless you are extraordinarily selfless, you are never going to accept a deal of the form "I give you n dollars in exchange for me killing you", regardless of n; therefore the financial value of your own life is almost always infinite*.
*: This does not mean that you put infinite utility on being alive, btw, just that the utility of money caps out at some value that is typically smaller than the value of being alive (and that cap is lowered dramatically if you are not around to spend the money).
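One toy way to picture that footnote (my own illustration, not a model anyone in the thread proposed): let the utility of money saturate at a cap,

$$u(\text{alive}, m) = u_{\text{alive}} + C\,(1 - e^{-m/s}), \qquad u(\text{dead}, m) \le u_{\text{dead}} + C_{\text{dead}},$$

with $u_{\text{dead}} + C_{\text{dead}} < u_{\text{alive}}$ (the dead-state cap $C_{\text{dead}}$ is tiny because you aren't around to spend the money). Then no finite $m$ makes the deal worth taking, even though none of the utilities involved are infinite.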
Fair enough, let me try to rephrase that without using the word friendliness:
We're trying to make a superintelligent AI that answers all of our questions accurately but does not otherwise influence the world and has no ulterior motives beyond correctly answering questions that we ask of it.
If we instead accidentally made an AI that decides that it is acceptable to (for instance) manipulate us into asking simpler questions so that it can answer more of them, it is preferable that it doesn't believe anyone is listening to the answers it gives, because that is...
False positives are vastly better than false negatives when testing for friendliness though. In the case of an oracle AI, friendliness includes a desire to answer questions truthfully regardless of the consequences to the outside world.
Ah yes, that did it (and I think I have seen the line drawing before), but it still takes a serious conscious effort to see the old woman in either of those. Maybe some Freudian thing where my mind prefers looking at young girls over old women :P
For me, the pictures in the OP stop being a man at around panel 6; going back, they stop being a woman at around panel 4. I can flip your second example by unfocusing and refocusing my eyes, but in your first example I can't for the life of me see anything other than a young woman looking away from the camera (I'm assuming there is an old woman in there somewhere, based on the image name).
Could you give a hint as to how to flip it? I'm assuming the ear turns into an eye or something, but I've been trying for about half an hour now and it is annoying the crap out of me.
(eg if accuracy is defined in terms of the reaction of people that read its output).
I'm mostly ignorant about AI design beyond what I picked up on this site, but could you explain why you would define accuracy in terms of how people react to the answers? There doesn't seem to be an obvious difference between how I react to information that is true or (unbeknownst to me) false. Is it just for training questions?
I'm not sure how much I agree with the whole "punishing correct behavior to avoid encouraging it" (how does the saintly person know that this is the right thing for him to do if it is wrong for others to follow his example?), but I think the general point about tracking whose utility (or lives, in this case) you are sacrificing is a good one.
Mild fear here: I can talk in groups of people just fine, but I get nervous before and during a presentation (something I have taken deliberate steps to get better at).
For me at least, the primary thing that helps is being comfortable with the subject matter. If I feel like I know what I'm talking about and I have practiced what I am going to say, it usually goes fine (it took some effort to get to this level, btw), but if I feel like I have to bluff my way through, everything falls apart really fast. The number of people in the audience and how well I k...
Basically, the ends don't justify the means (Among Humans). We are nowhere near smart enough to think those kinds of decisions (or any decisions, really) through past all their consequences (and neither is Elon Musk).
It is possible that Musk is right and (in this specific case) it really is a net benefit to mankind not to take one minute to phrase something in a way that is less hurtful, but in the history of mankind I would expect that the vast majority of people who believed this were actually just assholes trying to justify their behavior. And beside...
I'm still sad that there isn't a Dictionary of Numbers for Firefox; it sounds amazing, but it isn't enough to make me switch to Chrome just for that.
I stand corrected, thank you.
I prefer the English translation; it's more direct, though it does lack the bit about avoiding your own mistakes.
A more literal translation for those that don't speak German:
Those that attempt to learn from their mistakes are idiots. I always try to learn from the mistakes of others and avoid making any myself.
Note: I'm not a German speaker, what I know of the language is from three years of high school classes taken over a decade ago, but I think this translation is more or less correct.
Moreover (according to a five-minute Wikipedia search), not all doctors swear the same oath, but the modern version of the Hippocratic Oath does not have an explicit "Thou shalt not kill" provision; in fact, it doesn't even include the commonly quoted "First, do no harm".
Obviously taking a person's life, even with his/her consent, may violate the personal ethics of some people, but if that is the problem, the obvious solution is to find a different doctor.
Thanks!
Is this the place to ask technical questions about how the site works? If so, then I'm wondering why I can't find any of the rationality quote threads on the main discussion page anymore (I thought we'd just stopped doing those, until I saw it pop up in the side bar just now). If not, then I think I just asked anyway. :P
"You say that every man thinks himself to be on the good side, that every man who opposed you was deluding himself. Did you ever stop to consider that maybe you were the one on the wrong side?"
-- Vasher (from Warbreaker) explaining how that particular algorithm looks from the inside.
To add my own highly anecdotal evidence: my experience is that most people with a background in computer science or physics have no active model of how consciousness maps to brains, but when prodded they indeed usually come up with some form of functionalism*.
My own position is that I'm highly confused by consciousness in general, but I'm leaning slightly towards substance dualism (I have a background in computer science).
*: Though note that quite a few of these people simultaneously believe that it is fundamentally impossible to do accurate natural language parsing with a Turing machine, so their position might not be completely thought through.
And conversely, some of the unusual-ness that can be attributed to IQ is only very indirectly caused by it. For instance, being able to work around some of the more common failure modes of the brain probably makes a significant portion of LessWrong more unusual than the average person and understanding most of the advice on this site requires at least some minimum level of mental processing power and ability to abstract.
The media very rarely lies