All of Gavin's Comments + Replies

The easiest way is probably to build a modestly-sized company doing software and then find a way to destabilize the government and cause hyperinflation.

I think the rule of thumb should be: if your AI could be intentionally deployed to take over the world, it's highly likely to do so unintentionally.

Answer by Gavin-10

I was able to get this one on my first try:

---

Prompt: I am teaching my son a simple logic problem. "All blargs are schminky," I explain. "But not all schminkies are blarg."

GPT-3: "What's the difference?" he asks, confused. "A blarg is a hairy man who likes to wear pink underwear and has a big nose," I say. "And a schminky is an ugly person with no sense of fashion or hygiene."

---

Really impressive! Blarg and Schminky are both words that appear on the internet, but this relationship between them is co... (read more)

A few plausible limited abilities that could provide decisive first-mover advantages:

  • The ability to remotely take control of any networked computer
  • The ability to defeat all conventional cryptography, which would provide a decisive advantage in the type of conflict we're currently seeing
  • The ability to reliably predict market price movements

One way to employ Space Mom might be in how confidently you believe expert consensus, particularly given that experts rarely state their confidence levels. For instance:

A. Expert consensus says that horoscopes are bunk. I believe it! I have a tight confidence interval on that.

B. Expert consensus says that hospitals provide significant value. I believe that too! But thanks to Robin Hanson, I'm less confident in it. Maybe we're mostly wasting our healthcare dollars? Probably not, but I'll keep that door open in my mind.

----

Separately, I thi... (read more)

Isn't this true in a somewhat weaker form? It takes individuals and groups putting in effort at personal risk to move society forward. The fact that we are stuck in inadequate equilibria is evidence that we have not progressed as far as we could.

Scientists moving from Elsevier to open access happened because enough of them cared enough to put in the effort and take the risk to their personal success. If they had cared a little bit more on average, it would have happened earlier. If they had cared a little less, maybe it would have taken a few more y... (read more)

Yeah, this isn't obviously wrong from where I'm standing:

"the rules of science aren't strict enough and if scientists just cared enough to actually make an effort and try to solve the problem, rather than being happy to meet the low bar of what's socially demanded of them, then science would progress a lot faster"

But it's imprecise. Eliezer is saying that the amount of extra individual effort, rationality, creative institution redesign, etc. to yield significant outperformance isn't trivial. (In my own experience, pe... (read more)

Similar to some of the other ideas, but here are my framings:

  1. Virtually all of the space in the universe has been taken over by superintelligences. We find ourselves observing the universe from one of the rare uncolonized areas because it would be impossible for us to exist in a colonized one. Thus, it shouldn't be too surprising that our little area of non-colonization is just now popping out a new superintelligence. The most likely outcome for an intelligent species is to watch the area around them become colonized while they cannot develop fast enoug

... (read more)
-1RedErin
But it is unethical to allow all the suffering that occurs on our planet.
1James_Miller
Evolution should favor species that have expansion as a terminal value.

My real solution was not to own a car at all. Feel free to discount my advice appropriately!

I don't have the knowledge to give a full post, but I absolutely hate car repair. And if you buy a used car, there's a good chance that someone is selling it because it has maintenance issues. This happened to me, and no matter how many times I took the car to the mechanic it just kept having problems.

On the other hand, new cars have a huge extra price tag just because they're new. So the classic advice is to never buy a new car, because the moment you drive it off the lot it loses a ton of value instantly.

Here are a couple ideas for how to handle this:

  1. B

... (read more)
0drethelin
If you really hate repairs, doesn't it make much more sense just to lease yourself?

This seems like a lot of focus on MIRI sending good signals to outsiders. The "publish or perish" treadmill of academia is exactly why privately funded organizations like MIRI are needed.

The things that su3su2u1 wants MIRI to be already exist in academia. The whole point of MIRI is to create an organization of a type that doesn't currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you're going to get a lot of low-quality publications. Citations are only slightly better, es... (read more)

If MIRI doesn't publish reasonably frequently (via peer review), how do you know they aren't wasting donor money? Donors can't evaluate their stuff themselves, and MIRI doesn't seem to submit a lot of stuff to peer review.

How do you know they aren't just living it up in a very expensive part of the country, doing the equivalent of freshman philosophizing in front of the whiteboard? The way you usually know is via peer review -- e.g. other people previously declared to have produced good things declare that MIRI produces good things.

-1passive_fist
One dictionary definition of academia is "the environment or community concerned with the pursuit of research, education, and scholarship." By this definition MIRI is already part of academia. It's just a separate academic island with tenuous links to the broader academic mainland. MIRI is a research organization. If you maintain that it is outside of academia then you have to explain what exactly makes it different, and why it should be immune to the pressures of publishing. Low-quality publications don't get accepted and published. I know of no universities that would rather have a lot of third-rate publications than a small number of Nature publications. I'll agree with you that things like impact factor aren't good metrics but that's somewhat missing the point here.
3Viliam
Isn't it "cultish" to assume that an organization could do anything better than the high-status Academia? :P Because many people seem to worry about publishing, I would probably treat it as another form of PR. PR is something that is not your main reason to exist, but you do in anyway, to survive socially. Maximizing the academic article production seems to fit here: it is not MIRI's goal, but it would help to get MIRI accepted (or maybe not) and it would be good for advertising. Therefore, AcademiaPR should be a separate department of MIRI, but it definitely should exist. It could probably be done by one person. The job of the person would be to maximize MIRI-related academic articles, without making it too costly for the organization. One possible method that didn't require even five minutes of thinking: Find smart university students who are interested in MIRI's work but want to stay in academia. Invite them to MIRI's workshops, make them familiar with what MIRI is doing but doesn't care about publishing. Then offer them to become co-authors by taking the ideas, polishing them, and getting them published in academic journals. MIRI gets publications, the students get a new partially explored topic to write about; win/win. Also known as "division of labor".

The whole point of MIRI is to create an organization of a type that doesn't currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you're going to get a lot of low-quality publications. Citations are only slightly better, especially if you're focused on ignored areas of research.

Just because MIRI researchers' incentives aren't distorted by "publish or perish" culture, it doesn't mean they aren't distorted by other things, especially those that are associated with lack of feedback and accountability.

7[anonymous]
I think there's definitely not enough thought given to this, especially when they say one of the main constraints is getting interested researchers.

Ever since I started hanging out on LW and working on UDT-ish math, I've been telling SIAI/MIRI folks that they should focus on public research output above all else. (Eliezer's attitude back then was the complete opposite.) Eventually Luke came around to that point of view, and things started to change. But that took, like, five years of persuasion from me and other folks.

After reading su3su2u1's post, I feel that growing closer to academia is another obviously good step. It'll happen eventually, if MIRI is to have an impact. Why wait another five years to start? Why not start now?

You might want to examine what sort of in-group out-group dynamics are at play here, as well as some related issues. I know I run into these things frequently--I find the best defense mechanism for me is to try to examine the root of where feelings come from originally, and why certain ideas are so threatening.

Some questions that you can ask yourself:

  1. Are these claims (or their claimants) subtly implying that I am in a group of "the bad guys"?
  2. Is part of my identity wrapped up in the things that these claims are against?
  3. Do I have a gut instinct
... (read more)

Geothermal or similar cooling requires a pretty significant capital investment in order to work. My guess is that a basic air conditioning unit is a cheaper and simpler fix in most cases.

The problem is that even that fix may be out of the reach of many residents of Karachi.

3Daniel_Burfoot
By "people" I meant governments, companies or NGOs. Sure a basic AC unit is cheaper for one person, but it seems plausible that a piping system like the one I described would be a cheaper way to cool a large area. Note that AC will cool one person's house, but contributes a net heating effect to the city.

Maybe the elder civs aren't either. It might take billions of years to convert an entire light cone into dark computronium. And they're 84.5% of the way done.

I'm guessing the issue with this is that the proportion of dark matter doesn't change if you look at older or younger astronomical features.

It would be very unusual indeed if the element distribution of optimal computronium exactly matched that of a typical solar system.

But if it were not the optimal computronium, but the easiest-to-build computronium, it would be made up of whatever was available in the local area.

0jacob_cannell
Yes - and that is related to my point - the configuration will depend on the matter in the system and the options at hand, and the best development paths are unlikely to turn all of the matter into computronium.

META: I'd like to suggest having a separate thread for each publication. These attract far more interest than any other threads, and after the first 24 hours the top comments are set and there's little new discussion.

There aren't very many threads posted in discussion these days, so it's not like there is other good content that will be crowded out by one new thread every 1-3 days.

0Dreaded_Anomaly
You can change the comment sort to "new" instead of "top", below the tags at the bottom of the original post.

Quirrell seems to be on the road to getting the Philosopher's Stone. It's certainly possible that he will fail, or that Harry ( / time-turned Cedric Diggory) will manage to swipe it at the last minute. But with around 80k words left to go, there doesn't seem to be a whole lot of story left if Harry gets the stone in the next couple of chapters.

I draw your attention to a few quotes concerning the Philosopher's Stone:

His strongest road to life is the Philosopher’s Stone, which Flamel assures me that not even Voldemort could create on his own; by that road he would rise gre

... (read more)

Apparently Professors can cast memory charms without setting off the wards.

The great vacation sounds to me like it ends with me being killed and another version of me being recognized. I realize that these issues of consciousness and continuity are far from settled, but at this point that's my best guess. Incidentally, if anyone thinks there's a solid argument explaining what does and doesn't count as "me" and why, I'd be interested to hear it. Maybe there's a way to dissolve the question?

In any event, I wasn't able to easily choose between one or the other. Wireheading sounds pretty good to me.

RottenTomatoes has a much broader ratings spread. The current box office hits range from 7% to 94%. This is because they aggregate binary "positive" and "negative" reviews. As jaime2000 notes, YouTube has switched to a similar rating system and it seems to keep the ratings very responsive.

This doesn't really tell us a lot about how people predict others' success. The information has been intentionally limited to a very high degree. It's basically asking the test participants "This individual usually scores an 87. What do you expect her to score next time?" All of the interactions that could potentially create bias have been artificially stripped away by the experiment.

This means that participants are forced by the experimental setup to use Outside View, when they could easily be fooled into taking the Inside View and being swayed b... (read more)

1Vulture
But in a trilemma you can only get one, not two, right?

I was recently linked to this Wired article from a few months back on new results in the Bohmian interpretation of Quantum Mechanics: http://www.wired.com/2014/06/the-new-quantum-reality/

Should we be taking this seriously? The ability to duplicate the double slit experiment at classical scale is pretty impressive.

Or maybe this is still just wishful thinking trying to escape the weirdnesses of the Copenhagen and Many Worlds interpretations.

1Gunnar_Zarncke
I saw this some time ago when it was mentioned on Slashdot. By now there should be lots of nice videos illustrating this on YouTube. One is this. What I really like about this is that it allows one to gain conflict-free intuitions about QM via macroscopic processes. See also De Broglie–Bohm theory. I do not see a clear reason why MWI must be preferred. For me the deciding point is which can (better) be generalized relativistically. Apparently there are Bohmian-mechanics-based approaches. ADDED: The latter article contains the interesting conclusion: "if Bohmian mechanics indeed cannot be made relativistic, it seems likely that quantum mechanics can't either".
2pragmatist
Bohmian mechanics and the Many Worlds interpretation make identical predictions (at least, as long as we ignore anthropic considerations). I haven't yet read the article, but if it is claiming that this experiment is some sort of vindication of Bohmian mechanics, then I suspect it is wrong.

The most standard business tradeoff is Cheap vs Fast vs Good, which typically you're only supposed to be able to get two of.

5Andy_McKenzie
Yeah, I find these three-pronged trade-offs fairly interesting. I think it's wrong to say "choose two"; for example, you could always choose to be somewhere in the middle if you consider the space to be a triangle. Do you know of a word for a three-pronged trade-off?

Does anyone have experience with Inositol? It was mentioned recently on one of the better parts of the website no one should ever go to, and I just picked up a bottle of it. It seems like it might help with pretty much anything and doesn't have any downsides... which makes me a bit suspicious.

In some sense I think General Intelligence may contain Rationality. We're just playing definition games here, but I think my definitions match the general LW/Rationality Community usage.

An agent that perfectly plays a solved game ( http://en.wikipedia.org/wiki/Solved_game ) is perfectly rational. But its intelligence is limited, because it can only accept a limited type of input -- the states of a tic-tac-toe board, for instance. (See the minimax sketch below.)

We can certainly point to people who are extremely intelligent but quite irrational in some respects--but if you increased the... (read more)
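
To make the "perfect play on a solved game" point concrete, here is a minimal sketch in Python: a plain minimax player for tic-tac-toe. It is perfectly rational within its domain, but the only input it can ever accept is a nine-cell board. All details are invented for illustration.

```python
# Perfect play on a solved game: plain minimax over tic-tac-toe.
# X maximizes, O minimizes; '' marks an empty cell.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Return (game value, best move) for the player to move."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    if all(board):                            # board full: draw
        return 0, None
    best = None
    for move in [i for i, c in enumerate(board) if not c]:
        board[move] = player
        value, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[move] = ''
        if best is None or (player == 'X' and value > best[0]) \
                        or (player == 'O' and value < best[0]):
            best = (value, move)
    return best

print(minimax([''] * 9, 'X'))  # (0, 0): tic-tac-toe is a draw under perfect play
```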

I suppose if you really can't stand the main character, there's not much point in reading the thing.

I was somewhat aggravated by the first few chapters, in particular the conversation between Harry and McGonagall about the medical kit. Was that one of the places where you had your aggravated reaction?

I found myself sympathizing with both sides, and wishing Harry would just shut up--and then catching myself and thinking "but he's completely right. And how can he back down on this when lives are potentially at stake, just to make her feel better?"

3tetronian2
Yes, I did find that section grating. I'm describing my emotions post-hoc here (which is not generally reliable), but what I found irritating about the first few chapters was the fact that Harry acts in an extremely arrogant way, nearly to the point of coercing the adult characters, and the story-universe appears to back him up at every turn. This is the "Atlas Shrugged" effect described downthread by CellBioGuy. Harry is probably right in most of those arguments, but he is only so effortlessly correct and competent because the story-universe is designed so that he can teach these rationality lessons to other characters. It feels like the world is unfair, and unfair in the favor of an unlikeable and arrogant character. There is a real-world corollary of this, of course--very arrogant people who always get what they want--and I suspect my emotional reactions to these real and fictional situations are very similar. (I have since caught up with the rest of the fic, and enjoyed most of it.)

I would go even further and point out how Harry's arrogance is good for the story. Here's my approach to this critique:

"You're absolutely right that Harry!HPMOR is arrogant and condescending. It is a clear character flaw, and repeatedly gets in the way of his success. As part of a work of fiction, this is exactly how things should be. All people have flaws, and a story with a character with not flaws wouldn't be interesting to read!

Harry suffers significantly due to this trait -- and making characters suffer for their flaws is precisely what a good author does.

Later on... (read more)

8tetronian2
While this is true, there can be a distinction between a character with flaws and a character who is extremely irritating to read about. And this is one of those judgement calls where The Audience is Always Right; it seems very reasonable to stop reading a story if the protagonist noticeably irritates you. In general, commentary to the effect of "you should like this thing" is not very useful, especially if you are trying to figure out why someone reacted negatively. (These discussions in which one group has an overwhelmingly strong "squick" or "ew" reaction and another group does not are fascinating to me, not least of all because they seem to pop up quite frequently here, e.g. about Eliezer's OKCupid profile and NYC cuddle piles. Both sides spew huge amounts of ink explaining their emotional reactions, and yet there never seems to be any actual sharing of understanding. In the interests of trying harder...I was also very aggravated by the first few chapters of HPMOR, and would be happy to discuss it calmly here.)

It sounds like we're largely on the same page, noting that what counts as "disastrous" can be somewhat subjective.

Anytime you're thinking about buying insurance, double-check whether it actually makes more sense to self-insure. It may be better to put all the money you would otherwise spend on insurance into a "rainy day fund" rather than buying ten different types of insurance.

In general, if you can financially survive the bad thing, then buying insurance isn't a good idea. This is why it almost never makes sense to insure a $1000 computer or get the "extended warranty." Just save all the money you would spend on extended warranties on your devices, ... (read more)
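
A minimal sketch of the self-insurance comparison, in Python; the failure probability and warranty price below are invented numbers, not data:

```python
# Self-insure vs. extended warranty, in expectation. Numbers are made up.
p_loss = 0.05     # assumed chance the $1000 computer dies this year
loss = 1000.0     # replacement cost
premium = 100.0   # assumed yearly extended-warranty price

expected_cost = p_loss * loss  # $50/year expected cost of self-insuring
print(f"self-insure: ${expected_cost:.0f}/yr vs. warranty: ${premium:.0f}/yr")
# The warranty is priced above the expected loss (that margin is the
# insurer's profit), so it only makes sense if a $1000 hit would be
# ruinous -- i.e., if you can't "financially survive the bad thing."
```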

Though note that insurance may still be useful if you have self-control problems with regard to money. If you've paid your yearly insurance payment, the money is spent and will protect you for the rest of the year. If you instead put the money in a rainy day fund, there may be a constant temptation to dip into that fund even for things that aren't actual emergencies.

Of course, that money being permanently spent and not being available for other purposes does have its downsides, too.

7Metus
I appreciate the extension of my thought process. It is very clear to me that since you have to pay an insurance premium, buying insurance is necessarily a net loss in expectation. Buying insurance is very meaningful before a rainy day fund is filled up, if emergency financing methods are not available through a credit card or very trustworthy person, and if the insurance contracts include other services, e.g. getting liabilities of the other party paid in case of their unwillingness to pay. This is implicit in my phrasing but made explicit by your post and will be included in the end report. Generally I come to the conclusion that buying insurance is a necessity unless you are perversely rich, and even then there is some meaning found in insurance, as even insurance companies themselves are insured. Just go for contracts with high co-pay to lower your exposure to the insurance premium, which is basically just unnecessary bureaucracy in the case of small claims, as in the example of the $1000 computer. For things in that price class I read an interesting sentence: "if you cannot afford to buy it twice, you can't afford it in the first place", alluding to self-insurance.

In the publishing industry, it is emphatically not the case that you can sell millions of books from a random unknown author with a major marketing campaign. It's nearly impossible to replicate that success even with an amazing book!

For all its flaws (and it has many), Fifty Shades had something that the market was ready for. Literary financial successes like this happen only a couple times a decade.

Isn't that a necessary part of steelmanning an argument you disagree with? My understanding is that you strengthen all the parts that you can think of to strengthen, but ultimately have to leave in the bit that you think is in error and can't be salvaged.

Once you've steelmanned, there should still be something that you disagree with. Otherwise you're not steelmanning, you're just making an argument you believe in.

7formido
If you take a position on virtually any issue that's controversial or interesting, there will be weaknesses to your position. Actual weaknesses. I thought the purpose of steelmanning was to find and acknowledge those weaknesses, not merely give the appearance of acknowledging weaknesses. If that's not right, then I think we need a new word for the latter concept because that one seems more useful and truth seeking. If you're stretching things beyond the domains of validity and using tricks, it sounds awfully like you're setting up straw men, at the very least in your own mind. Seems more debate club than rationality.
1Stuart_Armstrong
A much better phrasing of what I was thinking. But I think Kaj's approach has some merit as well - we should find a name for "extracting the best we can from opposing arguments".

Part of the point of steelmanning, as I understand it, is to see whether there is a bit that can't be salvaged. If you correct the unnecessary flaws and find that the strengthened argument is actually correct (and, ostensibly, change your mind), it seems appropriate to still call that process steelmanning. Or rather, even if it's not appropriate, people seem to keep using it like that anyway.

If the five year old can't understand, then I think "Yes" is a completely decent answer to this question.

If I were in this situation, I would write letters to the child to be delivered/opened as they grew older. This way I would still continue to have an active effect on their life. We "exist" to other people when we have measurable effects on them, so this would be a way to continue to love them in a unidirectional way.

That depends on whether you think that: a) the past ceases to exist as time passes, or b) the universe is all of the past and all of the future, and we just happen to experience it in a certain chronological order

The past may still be "there," but inaccessible to us. So the answer to this question is probably to dissolve it. In one sense, I won't still love you. In another, my love will always exist and always continue to have an effect on you.

1Scott Garrabrant
I think the A theory of time is effectively disproved by relativity. By the way, for those who do not know, these are actually called "the A theory of time" and "the B theory of time"
0DanielLC
Explain like I'm five.
3Jiro
... and the five year old won't understand those subtleties and will interpret it to mean something comforting but false. An answer to a question is one thing, and an answer that a five year old can understand is another. (Besides, if the five year old's parent loves her forever because the past is there, is that true for everything? Will her parent always be dying (since the death will have happened in the past)? Whenever she's punished, does that punishment last forever? Do you tell five year olds who have the flu that the flu will always be around forever?)

I'm not disagreeing with the general thrust of your comment, which I think makes a lot of sense.

But the idea that an AGI must start out with the ability to parse human languages effectively is not at all required. An AGI is an alien. It might grow up with a completely different sort of intelligence, and only at the late stages of growth have the ability to interpret and model human thoughts and languages.

We consider "write fizzbuzz from a description" to be a basic task of intelligence because it is for humans. But humans are the most complicate... (read more)

0V_V
I agree that natural language understanding is not a necessary requirement for an early AGI, but I would say that by definition an AGI would have to be good at the sort of cognitive tasks humans are good at, even if communication with humans was somehow difficult. Think of making first contact with an undiscovered human civilization, or better, a civilization of space-faring aliens. Note that it is unclear whether there is any way to achieve "general intelligence" other than by combining lots of modules specialized for the various cognitive tasks we consider to be necessary for intelligence. I mean, Solomonoff induction, AIXI and the like do certainly look interesting on paper, but the extent they can be applied to real problems (if it is even possible) without any specialization is not known. The human brain is based on a fairly general architecture (biological neural networks), instantiated into thousands of specialized modules. You could argue that biological evolution should be included into human intelligence at a meta level, but biological evolution is not a goal-directed process, and it is unclear whether humans (or human-like intelligence) was a likely outcome or a fortunate occurrence. Anyway, even if it turns out that "universal induction" techniques are actually applicable to a practical human-made AGI, given the economic interests of humans I think that before seeing a full AGI we should see lots of improvements in narrow AI applications.

It's hard to judge just how important it is, because I have fairly regular access to it. However, food options definitely figure into long-term plans. For instance, the good food options around my office are a small but very real benefit that helps keep me in my current job. Similarly, while plenty of things can trump food, I would see the lack of quality food as a major downside to volunteering to live in the first colony on Mars. Which doesn't mean it would be decisive, of course.

I will suppress urges to eat in order to have the optimal expe... (read more)

I'm pretty confident that I have a strong terminal goal of "have the physiological experience of eating delicious barbecue." I have it in both near and far mode, and it remains even when it is disadvantageous in many other ways. Furthermore, I have it much more strongly than anyone I know personally, so it's unlikely to be a function of peer pressure.

That said, my longer-term goals seem to be a web of both terminal and instrumental values. Many things are terminal goals while also having instrumental value. Sex is a good in itself but also feeds other big-picture psychological and social needs.

-2TheAncientGeek
So who would you kill if they stood between you and a good barbecue? (It's almost like you guys haven't thought about what terminal means.)
1Qiaochu_Yuan
Hmm. I guess I would describe that as more of an urge than as a terminal goal. (I think "terminal goal" is supposed to activate a certain concept of deliberate and goal-directed behavior and what I'm mostly skeptical of is whether that concept is an accurate model of human preferences.) Do you, for example, make long-term plans based on calculations about which of various life options will cause you to eat the most delicious barbecue?

Less Wrongers voting here are primed to factor in how others outside of LW react to different terms. I interpreted "best sounding" as "which will be the most effective term," and imagine others did as well. Strategic thinking is kind of our thing.

Is the Turing Test really all that useful or important? I can easily imagine an AI powerful beyond any human intelligence that would still completely fail a few minutes of conversation with an expert.

There is so much about the human experience which is very particular to humans. Is a deep understanding of what certain subjective feelings are like, or of the niceties of social interaction, really necessary for creating an AI? Yes, an FAI eventually needs to have complete knowledge of those, but the intermediate steps may be quite alien and mechanical, even if intelligent.

Spending ... (read more)

2HungryHobo
The test is a response to the Problem Of Other Minds. Simply put, no other test will convince people that [insert something non-human here] is genuinely intelligent. The reasoning goes: strictly speaking the problem of other minds applies to other humans as well, but we politely assume that the humans we're talking to are genuinely intelligent, or at least conscious, on little more than the basis that we're talking to them and they're talking back like conscious human beings. The longer and more involved the test, the harder it is to use tricks to fake genuine intelligence.
2Stuart_Armstrong
It did seem like a useful tool for measuring (some types of) intelligence. Since it doesn't work, it would be useful to have a substitute...

It would absolutely be an improvement on the current system, no argument there.

Definitely something I'll need to be practicing! Here's my one-line summary: A middle schooler takes inspiration from his favorite video games as he adjusts to the challenges of life in a new school.

Interesting. Wouldn't Score Voting strongly incentivize voters to put 0s for major candidates other than their chosen one? It seems like there would always be a tension between voting strategically and voting honestly.

Delegable proxy is definitely a cool one. It probably does presuppose either a small population or advanced technology to run at scale. For my purposes (fiction) I could probably work around that somehow. It would definitely lead to a lot of drama with constantly shifting loyalties.

3John_Maxwell
It seems like it would solve US 3rd party voting issues, e.g. if I prefer Libertarians to Democrats to Republicans, I could give the Libertarian candidate 10/10, the Democratic candidate 10/10, and the Republican candidate 0/10.
4Will_BC
There is some incentive to vote strategically, but depending on the range and the other candidates on offer you might be better off voting honestly. If there's a candidate you dislike strongly, and a major candidate you only mildly dislike, you might give your favorite a 10, the mild dislike a 3, and the major dislike a 0, just to reduce the major dislike's chances. The worst-case scenario, which you describe, is called bullet voting, and is basically identical to our current system, but if even a small proportion vote honestly it can improve the results. The researcher who made the graph at the bottom of rangevoting.org ran computer simulations of voter preferences compared with candidate values, and found that something like 10% of voters giving their honest preference can improve results. I do recommend the book if you want to know more. I am very interested in delegable proxy, although it seems potentially dangerous, and I think if it were implemented it would need to be tempered with some less democratic devices, but it could certainly make for some interesting drama.
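
For concreteness, here is a minimal sketch of score-voting tallying in Python. The ballots are invented; the second one shows the "bullet voting" strategy discussed above.

```python
# Score (range) voting: every voter scores every candidate 0-10 and the
# highest total wins. Ballots are made up for illustration.
from collections import defaultdict

ballots = [
    {"Lib": 10, "Dem": 7, "Rep": 0},   # honest ranking Lib > Dem > Rep
    {"Lib": 10, "Dem": 0, "Rep": 0},   # strategic "bullet" ballot
    {"Lib": 0,  "Dem": 3, "Rep": 10},
]

totals = defaultdict(int)
for ballot in ballots:
    for candidate, score in ballot.items():
        totals[candidate] += score

print(dict(totals))                  # {'Lib': 20, 'Dem': 10, 'Rep': 10}
print(max(totals, key=totals.get))   # 'Lib' wins on total score
```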

Are there any methods for selecting important public officials from large populations that are arguably much better than the current standards as practiced in various modern democracies?

For instance, in actual vote tallying, methods like Condorcet seem to have huge advantages over simple plurality or runoff systems, and yet they are rarely used (see the tallying sketch below). Are there similar big gains to be made in the systems that lead up to a vote, or avoid one entirely?

For instance, a couple ideas:

  1. Candidates must collect a certain number of signatures to be eligible. A random selection of a
... (read more)
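
As a concrete reference for the Condorcet point above, here is a minimal sketch of Condorcet tallying in Python (ballots invented): a candidate wins only by beating every rival head-to-head, and a preference cycle means no winner exists.

```python
# Condorcet tallying: a candidate wins by beating every rival in
# head-to-head majorities. Ballots rank candidates best-first.
from itertools import combinations

ballots = [
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "C", "B"],
]
candidates = {"A", "B", "C"}

beats = {c: set() for c in candidates}
for x, y in combinations(sorted(candidates), 2):
    x_over_y = sum(b.index(x) < b.index(y) for b in ballots)
    if 2 * x_over_y > len(ballots):
        beats[x].add(y)
    elif 2 * x_over_y < len(ballots):
        beats[y].add(x)

winner = [c for c in candidates if beats[c] == candidates - {c}]
print(winner or "no Condorcet winner (preferences form a cycle)")  # ['A']
```
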
5Izeinwinter
Multilevel voting rounds have the problem that they end up representing elite interests to a very unhealthy degree -- round one is likely to select representatives with impressive accomplishments and credentials. Which means the pool of voters for subsequent votes is now more or less entirely from the very top social strata, and as such is not likely to elect leadership responsive to the needs of the people. This is not just theory; it has been tried, and the results were bad. The one I would actually like to see tried is rotating sortition. Representatives are selected for five-year terms at random: one year as non-voting observers, then four years of service (to counter the inherent problem of throwing people straight into the job).
2ChristianKl
Multiple rounds of liquid-democracy-based voting. You first select the top 10 candidates, then the top 5, followed by the top 2, and then you decide on the final candidate.
6jacob_cannell
There are several research communities working on this and related problems, generally under the headings: Computational/Algorithmic Mechanism Design and Social/Public Choice Theory
3Will_BC
I read a very interesting book on election systems by William Poundstone called Gaming the Vote. His conclusion was that Score (aka Range) Voting was the best system on offer. A brief explanation can be found at rangevoting.org; it's a rather simple and intuitive system. As to idea number 2, I had a similar idea a while back, which I called fractal hierarchy, and a few thoughts occurred to me. First, it need not be democratic at all levels. I was thinking that if you wanted to select for rationality then the entry levels might not be very good at this. This led me to realize that this was rather similar to how the US military is structured, and they are generally positively regarded and considered quite meritocratic, so it might be a good way to do things. Another idea for legislative systems that I came across, which is a merger between direct and representative democracy, is called delegable proxy. The idea is that every member can vote on every issue, but they can choose to delegate their vote to a proxy voter, who can then choose to delegate all their votes to another voter, and so on, until you get a number of people with large chunks of votes. But for any issue, an individual can retract their vote(s) and vote how they wish. I think this system would allow for a lot of legislation to get passed, and would most strongly represent the popular will, but that is also its greatest weakness, in that you get the issue of tyranny of the majority and ignorance of the masses playing a greater role. I am working on a project right now to put these and other ideas into practice, and will make a discussion post about it at some point in the future. If anyone is interested in helping me to better articulate my ideas before I post them, please let me know.
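
Here is a minimal sketch of the delegable-proxy mechanism described above, in Python. The member names and the abstain-on-cycle rule are my own assumptions, not part of the proposal.

```python
# Delegable proxy: each member either votes directly or delegates to a
# proxy; votes flow down delegation chains.

direct_votes = {"alice": "yes", "bob": "no", "carol": "yes"}
delegates = {"dana": "bob", "erin": "bob"}   # dana and erin trust bob

def resolve(member, seen=()):
    """Follow a delegation chain to a direct vote; None on cycle/dangling."""
    if member in direct_votes:
        return direct_votes[member]
    if member in seen or member not in delegates:
        return None                          # cycle or missing proxy: abstain
    return resolve(delegates[member], seen + (member,))

tally = {}
for member in set(direct_votes) | set(delegates):
    choice = resolve(member)
    if choice is not None:
        tally[choice] = tally.get(choice, 0) + 1

print(tally)   # yes: 2, no: 3 -- bob carries dana's and erin's votes
```
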
1Scott Garrabrant
In theory, if a small group of people can be trusted to pick a person among them who is at least slightly above average rationality of the group, you could add lots and lots of levels of voting for people who vote for people who vote for people who vote on the issues.

I turned in the first draft of my debut novel to my publisher. Now I get to relax for a few weeks before the real work starts.

4palladias
Mazel tov! Care to give us the equivalent of the back cover pitch?

I would think those would all be representable by a Turing Machine, but I could be wrong about that. Certainly, my understanding of the Ultimate Ensemble is that it would include universes that are continuous or include irrational numbers, etc.

3DanArmak
Turing Machines have discrete, not continuous, states. There is a countable infinity of Turing Machines.

Can I nominate this for promotion to Main/Front Page?

3Shmi
Some time ago I suggested that (non-link, non-meta) Discussion posts should be automatically promoted to Main if they are upvoted above 20-30 karma. This post is currently well below.

I can certainly imagine a universe where none of these concepts would be useful in predicting anything, and so they would never evolve in the "mind" of whatever entity inhabits it.

Can you actually imagine or describe one? I intellectually can accept that they might exist, but I don't know that my mind is capable of imagining a universe which could not be simulated on a Turing Machine.

The way that I define Tegmark's Ultimate Ensemble is as the set of all worlds that can be simulated by a Turing Machine. Is it possible to imagine in any concrete... (read more)

1Shmi
I never said it could not be, just that the Turing Machine would not be a concept that is likely to evolve there. Imagine a universe where there are no discrete entities, so numbers/addition is not a useful model. Whatever inhabits such a universe, if anything, would not develop the abstraction of counting. This universe could still be Turing-simulated (the Turing Machine is an abstraction from our universe). This is the essential point I am trying to make: mathematics is determined by the structure of the universe and is not an independent abstract entity. I feel like I failed, though.
4DanArmak
The set of all Turing Machines is merely countable. Instead, imagine an ensemble of universes corresponding to the real numbers, some of which aren't even computable. For example, universes running on classical mechanics, where values can be measured with infinite precision, and which also have continuous space and time to represent those infinitely precise values. In other words, a set of universes where two universes can be different in the position of a particle, position being defined as a real number. This seems easy to imagine - it's a pretty standard Newtonian world.
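
A sketch of the counting argument behind both replies (this is standard cardinal arithmetic, not anything specific to the original comments):

```latex
% Every Turing machine is a finite transition table over finite alphabets,
% i.e. a finite string, so:
\[
  |\{\text{Turing machines}\}| \le |\{\text{finite strings}\}| = \aleph_0 .
\]
% An ensemble indexed by even one real-valued parameter (a particle
% position, say) already has cardinality
\[
  |\mathbb{R}| = 2^{\aleph_0} > \aleph_0 ,
\]
% so almost all such universes correspond to no Turing machine simulation.
```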

There certainly needs to be some way to moderate out things that are unhelpful to the discussion. The question is who decides and how do they enforce that decision.

Other rationalist communities are able to discuss those issues without exploding. I assume that Alexander/Yvain is running Slate Star Codex as a benevolent dictatorship, which is why he can discuss hot button topics without everything exploding. Also, he doesn't have an organizational reputation to protect--LessWrong reflects directly on MIRI.

I agree in principle that the suggestion to simply di... (read more)

0ChristianKl
There's no ban in place on discussing politics. We do have highly controversial discussions about far-out political ideas like neoreaction.

I am afraid it would incentivize people to post controversial comments.

I'm not convinced that's a bad thing. It certainly would help avoid groupthink or forced conformity. And if someone gets upvoted for posting controversial argument A, then someone can respond and get even more votes for explaining the logic behind not-A.

So, what is your opinion on neoreaction, pick up artists, human biodiversity, capitalism, and feminism?

Just joking, please don't answer! The idea is that in a debate system without downvotes, this is the thread where strong opinions would get many upvotes... and many people would be frustrated that they can't downvote anymore, so instead they would write a reply in the opposite direction, which would also get many upvotes.

We wouldn't have groupthink and conformity. Instead, we would have factions and mindkilling. It could be fun at the beginning, but after a few months we would probably notice that we are debating the same things over and over.

Yes, that seems to be true. I didn't mean to cast it as a negative thing.

Looks to me like you were a victim of a culture of hyperdeveloped cynicism and skepticism. It's much easier to tear things down and complain than to create value, so we end up discouraging anyone trying to make anything useful.
