I was able to get this one on my first try:
---
Prompt: I am teaching my son a simple logic problem. "All blargs are schminky," I explain. "But not all schminkies are blarg."
GPT-3: "What's the difference?" he asks, confused. "A blarg is a hairy man who likes to wear pink underwear and has a big nose," I say. "And a schminky is an ugly person with no sense of fashion or hygiene."
---
Really impressive! Blarg and Schminky are both words that appear on the internet, but this relationship between them is co...
A few plausible limited abilities that could provide decisive first-mover advantages:
One way to employ Space Mom might be in how confidently you believe expert consensus, particularly given that experts rarely state their confidence levels. For instance:
A. Expert consensus says that horoscopes are bunk. I believe it! I have a tight confidence interval on that.
B. Expert consensus says that hospitals provide significant value. I believe that too! But thanks to Robin Hanson, I'm less confident in it. Maybe we're mostly wasting our healthcare dollars? Probably not, but I'll keep that door open in my mind.
---
Separately, I thi...
Isn't this true in a somewhat weaker form? It takes individuals and groups putting in effort at personal risk to move society forward. The fact that we are stuck in inadequate equilibria is evidence that we have not progressed as far as we could.
Scientists moving from Elsevier to open access happened because enough of them cared enough to put in the effort and take the risk to their personal success. If they had cared a little bit more on average, it would have happened earlier. If they had cared a little less, maybe it would have taken a few more y...
Yeah, this isn't obviously wrong from where I'm standing:
"the rules of science aren't strict enough and if scientists just cared enough to actually make an effort and try to solve the problem, rather than being happy to meet the low bar of what's socially demanded of them, then science would progress a lot faster"
But it's imprecise. Eliezer is saying that the amount of extra individual effort, rationality, creative institution redesign, etc. needed to yield significant outperformance isn't trivial. (In my own experience, pe...
Similar to some of the other ideas, but here are my framings:
Virtually all of the space in the universe has been taken over by superintelligences. We find ourselves observing the universe from one of the rare uncolonized areas because it would be impossible for us to exist in one of the colonized ones. Thus, it shouldn't be too surprising that our little area of non-colonization is just now popping out a new superintelligence. The most likely outcome for an intelligent species is to watch the area around them become colonized while they cannot develop fast enoug
I don't have the knowledge to give a full post, but I absolutely hate car repair. And if you buy a used car, there's a good chance that someone is selling it because it has maintenance issues. This happened to me, and no matter how many times I took the car to the mechanic it just kept having problems.
On the other hand, new cars have a huge extra price tag just because they're new. So the classic advice is to never buy a new car, because the moment you drive it off the lot it loses a ton of value instantly.
Here are a couple ideas for how to handle this:
B
It seems like a lot of the focus is on MIRI giving good signals to outsiders. The "publish or perish" treadmill of academia is exactly why privately funded organizations like MIRI are needed.
The things that su3su2u1 wants MIRI to be already exist in academia. The whole point of MIRI is to create an organization of a type that doesn't currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you're going to get a lot of low-quality publications. Citations are only slightly better, es...
If MIRI doesn't publish reasonably frequently (via peer review), how do you know they aren't wasting donor money? Donors can't evaluate their stuff themselves, and MIRI doesn't seem to submit a lot of stuff to peer review.
How do you know they aren't just living it up in a very expensive part of the country, doing the equivalent of freshman philosophizing in front of the whiteboard? The way you usually know is via peer review -- e.g. other people previously declared to have produced good things declare that MIRI produces good things.
The whole point of MIRI is to create an organization of a type that doesn't currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you're going to get a lot of low-quality publications. Citations are only slightly better, especially if you're focused on ignored areas of research.
Just because MIRI researchers' incentives aren't distorted by "publish or perish" culture doesn't mean they aren't distorted by other things, especially those associated with a lack of feedback and accountability.
Ever since I started hanging out on LW and working on UDT-ish math, I've been telling SIAI/MIRI folks that they should focus on public research output above all else. (Eliezer's attitude back then was the complete opposite.) Eventually Luke came around to that point of view, and things started to change. But that took, like, five years of persuasion from me and other folks.
After reading su3su2u1's post, I feel that growing closer to academia is another obviously good step. It'll happen eventually, if MIRI is to have an impact. Why wait another five years to start? Why not start now?
You might want to examine what sort of in-group/out-group dynamics are at play here, as well as some related issues. I know I run into these things frequently--I find the best defense mechanism for me is to try to examine the root of where feelings come from originally, and why certain ideas are so threatening.
Some questions that you can ask yourself:
Maybe the elder civs aren't either. It might take billions of years to convert an entire light cone into dark computronium. And they're 84.5% of the way done.
I'm guessing the issue with this is that the proportion of dark matter doesn't change if you look at older or younger astronomical features.
META: I'd like to suggest having a separate thread for each publication. These attract far more interest than any other threads, and after the first 24 hours the top comments are set and there's little new discussion.
There aren't very many threads posted in discussion these days, so it's not like there is other good content that will be crowded out by one new thread every 1-3 days.
Quirrell seems to be on the road to getting the Philosopher's Stone. It's certainly possible that he will fail, or that Harry ( / time-turned Cedric Diggory) will manage to swipe it at the last minute. But with around 80k words left to go, there doesn't seem to be a whole lot of story left if Harry gets the stone in the next couple of chapters.
I draw your attention to a few quotes concerning the Philosopher's Stone:
...His strongest road to life is the Philosopher’s Stone, which Flamel assures me that not even Voldemort could create on his own; by that road he would rise gre
The great vacation sounds to me like it ends with me being killed and another version of me being recognized. I realize that these issues of consciousness and continuity are far from settled, but at this point that's my best guess. Incidentally, if anyone thinks there's a solid argument explaining what does and doesn't count as "me" and why, I'd be interested to hear it. Maybe there's a way to dissolve the question?
In any event, I wasn't able to easily choose between the two. Wireheading sounds pretty good to me.
RottenTomatoes has a much broader spread of ratings. The current box office hits range from 7% to 94%. This is because they aggregate binary "positive" and "negative" reviews. As jaime2000 notes, YouTube has switched to a similar rating system and it seems to keep things very sensitive.
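A toy illustration of why binarizing reviews spreads scores out (the numbers below are made up for illustration, not actual RottenTomatoes data):

```python
# Hypothetical critic scores (out of 5) for two films.
good_movie = [4.5, 4.0, 3.5, 4.0, 3.0, 4.5, 3.5, 4.0]
weak_movie = [2.5, 3.0, 2.0, 3.5, 2.5, 2.0, 3.0, 2.5]

def mean_score(scores):
    """Average the raw 0-5 scores, expressed as a percentage."""
    return sum(scores) / len(scores) * 20

def percent_positive(scores, threshold=3.5):
    """Binarize each review at the threshold, then report the positive fraction."""
    return sum(s >= threshold for s in scores) / len(scores) * 100

for name, scores in [("good", good_movie), ("weak", weak_movie)]:
    print(f"{name}: mean {mean_score(scores):.1f}%, fresh {percent_positive(scores):.1f}%")
# Raw averages compress toward the middle (77.5% vs 52.5%), while the
# binarized "fresh" percentages spread out to 87.5% vs 12.5%.
```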
This doesn't really tell us a lot about how people predict others' success. The information has been intentionally limited to a very high degree. It's basically asking the test participants, "This individual usually scores an 87. What do you expect her to score next time?" All of the interactions that could potentially create bias have been artificially stripped away by the experiment.
This means that participants are forced by the experimental setup to use Outside View, when they could easily be fooled into taking the Inside View and being swayed b...
I was recently linked to this Wired article from a few months back on new results in the Bohmian interpretation of Quantum Mechanics: http://www.wired.com/2014/06/the-new-quantum-reality/
Should we be taking this seriously? The ability to duplicate the double-slit experiment at a classical scale is pretty impressive.
Or maybe this is still just wishful thinking trying to escape the weirdnesses of the Copenhagen and Many Worlds interpretations.
Does anyone have experience with Inositol? It was mentioned recently on one of the better parts of the website no one should ever go to, and I just picked up a bottle of it. It seems like it might help with pretty much anything and doesn't have any downsides . . . which makes me a bit suspicious.
In some sense I think General Intelligence may contain Rationality. We're just playing definition games here, but I think my definitions match the general LW/Rationality Community usage.
An agent which perfectly plays a solved game ( http://en.wikipedia.org/wiki/Solved_game ) is perfectly rational. But its intelligence is limited, because it can only accept a limited type of input, the states of a tic-tac-toe board, for instance.
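Here's a minimal sketch of what such a narrowly perfect agent looks like (my own toy example, not from the comment): an exhaustive-minimax tic-tac-toe player. It is flawlessly rational over its nine-cell input space and nothing more.

```python
def winner(board):
    """Return 'X' or 'O' if someone has won, else None. board is a 9-char string."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        opp_score, _ = minimax(child, opponent)
        score = -opp_score  # zero-sum: the opponent's best outcome is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# From an empty board, perfect play by both sides is a draw (score 0).
print(minimax(' ' * 9, 'X'))
```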
We can certainly point to people who are extremely intelligent but quite irrational in some respects--but if you increased the...
I suppose if you really can't stand the main character, there's not much point in reading the thing.
I was somewhat aggravated by the first few chapters, in particular the conversation between Harry and McGonagall about the medical kit. Was that one where you had your aggravated reaction?
I found myself sympathizing with both sides, and wishing Harry would just shut up--and then catching myself and thinking "but he's completely right. And how can he back down on this when lives are potentially at stake, just to make her feel better?"
I would go even further and point out how Harry's arrogance is good for the story. Here's my approach to this critique:
"You're absolutely right that Harry!HPMOR is arrogant and condescending. It is a clear character flaw, and repeatedly gets in the way of his success. As part of a work of fiction, this is exactly how things should be. All people have flaws, and a story with a character with no flaws wouldn't be interesting to read!
Harry suffers significantly due to this trait, which is precisely what a good author should do with their characters' flaws.
Later on...
Anytime you're thinking about buying insurance, double-check whether it actually makes more sense to self-insure. It may be better to put all the money you would otherwise put into insurance into a "rainy day fund" rather than buying ten different types of insurance.
In general, if you can financially survive the bad thing, then buying insurance isn't a good idea. This is why it almost never makes sense to insure a $1000 computer or get the "extended warranty." Just save all the money you would spend on extended warranties on your devices, ...
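To make the self-insure rule concrete, here's a rough expected-value sketch; the $1000 computer is from the comment above, but the premium, failure probability, and repair cost are made-up numbers:

```python
# Hypothetical extended-warranty decision for a $1000 laptop.
premium = 150          # assumed cost of a 2-year extended warranty
failure_prob = 0.08    # assumed chance of a covered failure in that window
repair_cost = 600      # assumed average out-of-pocket repair/replacement cost

expected_loss_self_insured = failure_prob * repair_cost   # = $48
print(f"Expected cost if self-insuring: ${expected_loss_self_insured:.0f}")
print(f"Cost of the warranty:           ${premium}")
# On average the warranty costs ~3x the expected loss; it only pays off in the
# unlucky branch. The calculus flips when the loss is one you can't absorb
# (house fire, liability, serious illness), which is what insurance is for.
```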
Though note that insurance may still be useful if you have self-control problems with regard to money. If you've paid your yearly insurance payment, the money is spent and will protect you for the rest of the year. If you instead put the money in a rainy day fund, there may be a constant temptation to dip into that fund even for things that aren't actual emergencies.
Of course, that money being permanently spent and not being available for other purposes does have its downsides, too.
In the publishing industry, it is emphatically not the case that a major marketing campaign can sell millions of books by a random unknown author. It's nearly impossible to replicate that success even with an amazing book!
For all its flaws (and it has many), Fifty Shades had something that the market was ready for. Literary financial successes like this happen only a couple times a decade.
Isn't that a necessary part of steelmanning an argument you disagree with? My understanding is that you strengthen all the parts that you can think of to strengthen, but ultimately have to leave in the bit that you think is in error and can't be salvaged.
Once you've steelmanned, there should still be something that you disagree with. Otherwise you're not steelmanning, you're just making an argument you believe in.
Part of the point of steelmanning, as I understand it, is to see whether there is a bit that can't be salvaged. If you correct the unnecessary flaws and find that the strengthened argument is actually correct (and, ostensibly, change your mind), it seems appropriate to still call that process steelmanning. Or rather, even if it's not appropriate, people seem to keep using it like that anyway.
If the five year old can't understand, then I think "Yes" is a completely decent answer to this question.
If I were in this situation, I would write letters to the child to be delivered/opened as they grew older. This way I would still continue to have an active effect on their life. We "exist" to other people when we have measurable effects on them, so this would be a way to continue to love them in a unidirectional way.
That depends on whether you think that: a) the past ceases to exist as time passes, or b) the universe is all of the past and all of the future, and we just happen to experience it in a certain chronological order
The past may still be "there," but inaccessible to us. So the answer to this question is probably to dissolve it. In one sense, I won't still love you. In another, my love will always exist and always continue to have an effect on you.
I'm not disagreeing with the general thrust of your comment, which I think makes a lot of sense.
But it is not at all required that an AGI start out with the ability to parse human languages effectively. An AGI is an alien. It might grow up with a completely different sort of intelligence, and only at the late stages of growth gain the ability to interpret and model human thoughts and languages.
We consider "write fizzbuzz from a description" to be a basic task of intelligence because it is for humans. But humans are the most complicate...
It's hard to judge just how important it is, because I have fairly regular access to it. However, food options definitely figure into long-term plans. For instance, the good food options around my office are a small but very real benefit that helps keep me in my current job. Similarly, while plenty of things can trump food, I would see the lack of quality food as a major downside to volunteering to live in the first colony on Mars. Which doesn't mean it would be decisive, of course.
I will suppress urges to eat in order to have the optimal expe...
I'm pretty confident that I have a strong terminal goal of "have the physiological experience of eating delicious barbecue." I have it in both near and far mode, and it remains even when it is disadvantageous in many other ways. Furthermore, I have it much more strongly than anyone I know personally, so it's unlikely to be a function of peer pressure.
That said, my longer term goals seem to be a web of both terminal and instrumental values. Many things are terminal goals as well as having instrumental value. Sex is a good in itself but also feeds other big-picture psychological and social needs.
Is the Turing Test really all that useful or important? I can easily imagine an AI powerful beyond any human intelligence that would still completely fail a few minutes of conversation with an expert.
There is so much about the human experience which is very particular to humans. Is it really necessary to create an AI with a deep understanding of what certain subjective feelings are like, or of the niceties of social interaction? Yes, an FAI eventually needs to have complete knowledge of those, but the intermediate steps may be quite alien and mechanical, even if intelligent.
Spending ...
Interesting. Wouldn't Score Voting strongly incentivize voters to put 0s for major candidates other than their chosen one? It seems like there would always be a tension between voting strategically and voting honestly.
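A toy sketch of that tension (the candidates and utilities below are hypothetical): an honest ballot scores each candidate by how much the voter likes them, while a strategic ballot max-scores the favorite and zeroes the main rival, buying extra influence per voter.

```python
def tally(ballots):
    """Sum the 0-9 scores each ballot gives to each candidate."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return totals

honest_ballot    = {"A": 9, "B": 6, "C": 3}   # sincere scores
strategic_ballot = {"A": 9, "B": 0, "C": 0}   # exaggerated to help A beat B

# Each honest ballot gives A only a +3 margin over B; each strategic ballot
# gives A a +9 margin, so exaggerating buys three times the influence.
print(tally([honest_ballot] * 3))      # {'A': 27, 'B': 18, 'C': 9}
print(tally([strategic_ballot] * 2))   # {'A': 18, 'B': 0, 'C': 0}
```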
Delegable proxy is definitely a cool one. It probably does presuppose either a small population or advanced technology to run at scale. For my purposes (fiction) I could probably work around that somehow. It would definitely lead to a lot of drama with constantly shifting loyalties.
Are there any methods for selecting important public officials from large populations that are arguably much better than the current standards as practiced in various modern democracies?
For instance, in actual vote tallying, methods like Condorcet seem to have huge advantages over simple plurality or runoff systems, and yet they are rarely used. Are there similar big gains to be made in the systems that lead up to a vote, or avoid one entirely?
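To illustrate the Condorcet point, a minimal pairwise-tallying sketch (the ballots are hypothetical, and this skips the cycle-breaking that a full method like Schulze or Ranked Pairs adds):

```python
# Each hypothetical ballot ranks the candidates best-to-worst.
ballots = [
    ["A", "B", "C"],
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "B", "A"],
    ["B", "A", "C"],
]
candidates = {"A", "B", "C"}

def margin(x, y):
    """Ballots ranking x above y, minus ballots ranking y above x."""
    return sum(1 if b.index(x) < b.index(y) else -1 for b in ballots)

# The Condorcet winner beats every other candidate head-to-head.
winner = next((c for c in candidates
               if all(margin(c, other) > 0 for other in candidates - {c})), None)
print(winner)
# First-choice plurality would split 2-2-1 between A and B, but B beats both
# A (3-2) and C (4-1) in pairwise contests, so B is the Condorcet winner.
```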
For instance, a couple ideas:
I can certainly imagine a universe where none of these concepts would be useful in predicting anything, and so they would never evolve in the "mind" of whatever entity inhabits it.
Can you actually imagine or describe one? I intellectually can accept that they might exist, but I don't know that my mind is capable of imagining a universe which could not be simulated on a Turing Machine.
The way that I define Tegmark's Ultimate Ensemble is as the set of all worlds that can be simulated by a Turing Machine. Is it possible to imagine in any concrete...
There certainly needs to be some way to moderate out things that are unhelpful to the discussion. The question is who decides and how do they enforce that decision.
Other rationalist communities are able to discuss those issues without exploding. I assume that Alexander/Yvain is running Slate Star Codex as a benevolent dictatorship, which is why he can discuss hot button topics without everything exploding. Also, he doesn't have an organizational reputation to protect--LessWrong reflects directly on MIRI.
I agree in principle that the suggestion to simply di...
I am afraid it would incentivize people to post controversial comments.
I'm not convinced that's a bad thing. It certainly would help avoid groupthink or forced conformity. And if someone gets upvoted for posting controversial argument A, then someone can respond and get even more votes for explaining the logic behind not-A.
So, what is your opinion on neoreaction, pick up artists, human biodiversity, capitalism, and feminism?
Just joking, please don't answer! The idea is that in a debate system without downvotes this is the thread where strong opinions would get many upvotes... and many people frustrated that they can't downvote anymore, so instead they would write a reply in the opposite direction, which would also get many upvotes.
We wouldn't have groupthink and conformity. Instead, we would have factions and mindkilling. It could be fun at the beginning, but after a few months we would probably notice that we are debating the same things over and over.
The easiest way is probably to build a modestly-sized company doing software and then find a way to destabilize the government and cause hyperinflation.
I think the rule of thumb should be: if your AI could be intentionally deployed to take over the world, it's highly likely to do so unintentionally.