All of Yoreth's Comments + Replies

Yoreth80

I previously posted Was the K-T event a Great Filter? as a pushback against the notion that different lineages of life on Earth evolving intelligence is really "independent evidence" in any meaningful sense. Intelligence can evolve only if there's selective pressure favoring it, and a large part of that pressure likely comes from the presence of other intelligent creatures competing for resources. Therefore mammals and birds together really should only count as one data point.

(It's more plausible that octopus intelligence is independent, since the marine b... (read more)

6Julian Bradshaw
Interesting thought. I think you have a point about coevolution, but I don't think it explains away everything in the birds vs. mammals case. How much are birds really competing with mammals vs. other birds/other animals? Mammals compete with lots of animals, why did only birds get smarter? I tend to think intra-niche/genus competition would generate most of the pressure for higher intelligence, and for whatever reason that competition doesn't seem to lead to huge intelligence gains in most species. (Re: octopus, cephalopods do have interactions with marine mammals. But also, their intelligence is seemingly different from mammals/birds - strong motor intelligence, but they're not really very social or cooperative. Hard to compare but I'd put them in a lower tier than the top birds/mammals for the parts of intelligence relevant to the Fermi Paradox.) In terms of the K-T event, I think it could plausibly qualify as a filter, but asteroid impacts of that size are common enough it can't be the Great Filter on its own - it doesn't seem the specific details of the impact (location/timing) are rare enough for that.
Yoreth10

Curb Your Enthusiasm - I didn't know you could be anonymous and tell people! I would've taken that option!

This is a good chance for me to interrogate my priors because I share (although not very strongly) the same intuitions that you criticize in this post. There's tension between the following and my desire not to live in a bland tall-poppy-syndrome dystopia where nobody ever wants to accomplish great things; I don't really know how I'd resolve it.

Intuition 1: Social praise is a superstimulus which titillates the senses and disturbs mental tranquility. Wh... (read more)

Yoreth30

Proof-of-work is a radical and relatively recent idea which does not yet have a direct correspondent in philosophy. Here, cryptographic proofs witness the expenditure of resources like physical energy to commit to particular beliefs. In this way, the true scale of the system which agrees on certain beliefs can be judged, with the largest system being the winner.

I think this relates to the notion that constructing convincing falsehoods is more difficult and costly than discovering truths, because (a) the more elaborate a falsehood is, the more likely it ... (read more)
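(For readers unfamiliar with the mechanism quoted above: the following is a minimal, illustrative sketch of hash-based proof-of-work, not taken from the post itself. The preimage search, the difficulty parameter, and the function names are assumptions chosen for clarity.)

```python
import hashlib

def prove_work(claim: str, difficulty: int = 16) -> int:
    """Search for a nonce such that sha256(claim + nonce) falls below a target.
    The prover expects to try about 2**difficulty hashes, so the returned nonce
    is evidence that real computation was spent committing to `claim`."""
    target = 1 << (256 - difficulty)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{claim}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_work(claim: str, nonce: int, difficulty: int = 16) -> bool:
    """Verification costs one hash, no matter how much work the prover spent."""
    digest = hashlib.sha256(f"{claim}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))
```

The asymmetry - expensive to produce, cheap to check - is what lets the true scale of the committing system be judged from the proofs alone.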

Yoreth40
  1. Reward yourself after each session.

What kinds of rewards do you use for this?

4matto
It depends on what I need at a given time. Sometimes it's some chocolate or other food that I like. Other times, it will be a walk or 10-20 minutes of exercise. When I finish a piece and I feel it went well, I'll spend a few minutes during a walk going over the process and feeling good.
Yoreth*70

Consider the following charts:

Chart 1

Chart 2

Chart 1 shows the encephalization quotient (EQ) of various lineages over time, while Chart 2 shows the maximum EQ of all known fossils from any given time. (Source 1, Source 2. Admittedly this research is pretty old, so if anyone knows of more recent data, that'd be good to know.)

Both of these charts show a surprising fact: that the intelligence of life on Earth stagnated (or even decreased) throughout the entire Mesozoic Era, and did not start increasing until immediately after the K/T event. From this it appears that life ha... (read more)

Yoreth70

Such a category is called paraphyletic. It can be informationally useful if the excluded subgroup has diverged far from the overarching group, such that it has gained characteristics not shared by the others and lost characteristics otherwise shared. But the less divergence has taken place, the harder it is to justify a paraphyletic category. The category "reptile" (excluding birds) makes sense today, but it wouldn't have made sense in the Jurassic period. The mammal/cetacean distinction is somewhere in the middle.

Animal/human is different because the evo... (read more)

3Jiro
Human intelligence counts as "gained characteristics not shared by the others". I think you're thinking that it doesn't count as a lot of divergence, but "a list of divergences with few items" doesn't mean "not a lot of divergence". Human intelligence has an effect on the environment and capabilities of humans that is equal or greater than the effect of the differences between birds and reptiles.
Yoreth10

I’m also not sure how far non-core and core identity rationalism are mutually exclusive. (Just like a lot of people are vaguely Christian without belonging to a church, so maybe a lot of people would be vaguely interested in rationalism without wanting to join their local temple)

Agreed; finding a way for multiple levels of involvement to coexist would be helpful. Anecdotally, when I first tried attending LW meetups in around 2010, I was turned off and did not try again for many years, because the conversation was so advanced I couldn't follow it. But wh... (read more)

Yoreth50

A few thoughts on this.

First, I probably have a higher appetite for religion-ifying rationalism than others in the community, but I wouldn't want to push my preferences too hard lest it scare people off. This may stem from my personal background as a cradle atheist. Religious people don't want rationality to become rivalrous with their religion, and ex-religionists don't want it to become the very thing they escaped. To the extent that it's good for rationality to become more religion-like, I think it'll happen on its own in the next few decades or centur... (read more)

2Srdjan Miletic
I agree this is a risk, both due to culty vibes and people not wanting a religion. I'm not sure in practice whether growing rationalism as a core identity would lead to fewer or more rationalists. I'm also not sure how far non-core and core identity rationalism are mutually exclusive. (Just like a lot of people are vaguely Christian without belonging to a church, so maybe a lot of people would be vaguely interested in rationalism without wanting to join their local temple.) I don't think there needs to be a specific, world-altering plan in order for a rationalist religion to be something worth pursuing. If you believe as I do that rationalism makes people better human beings, is morally right, and leads to more open, free, just and advanced societies, then creating and spreading it is good pretty much irrespective of social circumstances. So I think I depart quite strongly from the LessWrong consensus here. LessWrong has about, what, 200 active members? The broader group of people who would consider themselves rationalists or rationalist-adjacent is probably less than 10,000. The world has a population of about 8 billion people. Even assuming only a tiny proportion of people are naturally inclined towards rationalism, I really don't think we're anywhere close to addressing the full market. A few things to bear in mind: * Rationalist content is mostly in English. Most people don't speak/read English. Even those that do as a second language don't consume primarily English sources. * Rationalism is niche and hard to stumble upon. It's not like Christianity or left/right ideology in the West. Whereas those ideologies are broadcast at you constantly and you will know about them and roughly what they represent, rationalism is something you only find if you happen to luck out and stumble on this weird internet trail of breadcrumbs.
Yoreth20

1-3 months doesn't seem so bad as a timeline. While it's important not to let the perfect be the enemy of the good (since projects like this can easily turn into a boondoggle where everyone quibbles endlessly about what the end-product should look like), I think it's also worth a little bit of up-front effort to create something that we can improve upon later, rather than getting stuck with a mediocre solution permanently. (I imagine it's difficult to migrate a social network to a new platform once it's already gotten off the ground, the more so the more people have joined.)

1mingyuan
Well, Raemon said it's 1-3 months of work, but I'm a bit concerned that those 1-3 months might not start for another year or so, due to the LW team being busy as heck with a bunch of other priorities. I do agree that it's worth putting real thought into this though, and not starting out on a platform just so we can start out, if it might end up being the case that we want to use a different platform later.
Yoreth60

I would also like to register my opposition to using Facebook. While it might seem convenient in the short term, it makes the community more fragile by adding a centralized failure point that's unaccountable to any of its members. Communicating on LessWrong.com has the virtue of it being owned by the same community that it serves.

4mingyuan
I share a general opposition to Facebook. However, I'm not sure what would be a reasonable alternative. I've tried setting up Google Groups and Slacks for coordination of this type before, but those platforms have a bit of a 'talking in a library' problem - if they're inactive, they generally remain inactive. There's also the problem of needing to use a platform that people use all the time anyway. Slack is good for me, because I'm in multiple active Slack workspaces, but lots of people use it only for work or don't want multiple workspaces. Google Groups are okay on this axis because they can go to people's emails, but there's some magic startup energy that needs to go into making a Google Group active, and I don't know what it is (critical mass?). As for the LW suggestion - I don't feel that LessWrong currently has the infrastructure to support something similar to a Facebook group, and even if the LW team was willing to build something like that, they have dozens of other priorities. In addition, a lot of the groups I'm targeting identify as SlateStarCodex meetups and don't have buy-in to LessWrong either as a platform or as a thing they want to identify with. So, yes, I'm definitely open to alternatives to Facebook. I guess at this point a Google Group feels like the best option, but I'm not optimistic about it. Very open to continuing this conversation here or elsewhere.
Yoreth40

It seems to me that there's a tension at the heart of defining what the "purpose" of meetups is. On the one hand, the community aspect is one of the most valuable things one can get out of it - I love that I can visit dozens of cities across the US, and go to a Less Wrong meetup and instantly have stuff to talk about. On the other hand, a community cannot exist solely for its own sake. Someone's personal interest in participating in the community will naturally fluctuate over time, and if everyone quits the moment their interest touches zero then nobody wi... (read more)
Yoreth130

What's the relation between religion and morality? I drew up a table to compare the two. This shows the absolute numbers and the percentages normalized in two directions (by religion, and by morality). I also highlighted the cells corresponding to the greatest percentage across the direction that was not normalized (for example, 22.89% of agnostics said there's no such thing as morality, a higher percentage than any other religious group).

Many pairs were highlighted both ways. In other words, these are pairs such that "Xs are more likely to be Ys"... (read more)
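(As an illustration of the two-way normalization and highlighting described above - a sketch only, using made-up column names and toy data rather than the actual survey:)

```python
import pandas as pd

# Hypothetical respondent-level survey data; the column names and values are illustrative.
df = pd.DataFrame({
    "religion": ["agnostic", "agnostic", "atheist", "christian", "christian", "atheist"],
    "morality": ["none", "subjective", "subjective", "objective", "objective", "none"],
})

counts = pd.crosstab(df["religion"], df["morality"])         # absolute numbers
by_religion = counts.div(counts.sum(axis=1), axis=0) * 100   # % within each religion (rows sum to 100)
by_morality = counts.div(counts.sum(axis=0), axis=1) * 100   # % within each morality view (columns sum to 100)

# Highlight the greatest percentage across the direction that was NOT normalized:
# for each morality answer, which religion endorses it at the highest rate?
hl_religion = by_religion.eq(by_religion.max(axis=0), axis=1)
# ...and for each religion, which morality answer is most concentrated in it?
hl_morality = by_morality.eq(by_morality.max(axis=1), axis=0)

both_ways = hl_religion & hl_morality   # pairs highlighted in both directions
```

Cells that come out True in both masks correspond to the "highlighted both ways" pairs the comment describes.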

Yoreth00

Would it be correct to say that, insofar as you would hope that the one person would be willing to sacrifice his/her life for the cause of saving the 5*10^6 others, you yourself would pull the switch and then willingly sacrifice yourself to the death penalty (or whatever penalty there is for murder) for the same cause?

2Alicorn
I'd be willing to die (including as part of a legal sentence) to save that many people. (Not that I wouldn't avoid dying if I could, but if that were a necessary part of the saving-people process I'd still enact said process.) I wouldn't kill someone I believed unwilling, even for the same purpose, including via trolley.
Yoreth40

I think I may have artificially induced an Ugh Field in myself.

A little over a week ago it occurred to me that perhaps I was thinking too much about X, and that this was distracting me from more important things. So I resolved to not think about X for the next week.

Of course, I could not stop X from crossing my mind, but as soon as I noticed it, I would sternly think to myself, "No. Shut up. Think about something else."

Now that the week's over, I don't even want to think about X any more. It just feels too weird.

And maybe that's a good thing.

4Cyan
I have also artificially induced an Ugh Field in myself. A few months ago, I was having a horrible problem with websurfing procrastination. I started using Firefox for browsing and LeechBlock to limit (but not eliminate) my opportunities for websurfing instead of doing work. I'm on a Windows box, and for the first three days I disabled IE, but doing so caused knock-on effects, so I had to re-enable it. However, I knew that resorting to IE to surf would simply recreate my procrastination problem, so... I just didn't. Now, when the thought occurs to me to do so, it auto-squelches.
Yoreth00

I suppose, perhaps, an asteroid impact or nuclear holocaust? It's hard for me to imagine a disaster that wipes out 99.999999% of the population but doesn't just finish the job. The scenario is more a prompt to provoke examination of the amount of knowledge our civilization relies on.

(What first got me thinking about this was the idea that if you went up into space, you would find that the Earth was no longer protected by the anthropic principle, and so you would shortly see the LHC produce a black hole that devours the Earth. But you would be hard pressed to restart civilization from a space station, at least at current tech levels.)

0[anonymous]
The other problem is this: if there is a disaster that wipes out such a large percentage of the Earth's population, the few people who did survive it would probably be in very isolated areas and might not have access to any of the knowledge we've been talking about anyway. Still, it is interesting to look at what knowledge our civilization rests on. It seems to me that a lot of the infrastructure we rely on in our day-to-day lives is "irreducibly complex"--for example, we know how to make computers, but this is not a necessary skill in a disaster scenario (or our ancestral environment).
0Blueberry
I am not following this. Why would the anthropic principle no longer apply if you went into space?
Yoreth20

But apparently it still wasn't enough to keep them together...

1wedrifid
Yoreth may subtract 50 MHG points from hegemonicon but also loses 15 himself.
4Blueberry
Not all relationships need to last forever, and it's not necessarily a failure if one doesn't.
Yoreth50

Suppose you know from good sources that there is going to be a huge catastrophe in the very near future, which will result in the near-extermination of humanity (but the natural environment will recover more easily). You and a small group of ordinary men and women will have to restart from scratch.

You have a limited time to compile a compendium of knowledge to preserve for the new era. What is the most important knowledge to preserve?

I am humbled by how poorly my own personal knowledge would fare.

5KrisC
Maps. Locations of pre-disaster settlements to be used as supply caches. Locations of structures to be used for defense. Locations of physical resources for ongoing exploitation: water, fisheries, quarries. Locations of no travel zones to avoid pathogens.
3jimrandomh
Presupposing that only a limited amount of knowledge could be saved seems wrong. You could bury petabytes of data in digital form, then print out a few books' worth of hints for getting back to the technology level necessary to read it.
2[anonymous]
Let's examine the problem in more detail: Different disaster scenarios would require different pieces of information, so it would help if you knew exactly what kind of catastrophe. However, if you can preserve a very large compendium of knowledge, then you can create a catalogue of necessary information for almost every type of doomsday scenario (nuclear war, environmental catastrophe, etc.) so that you will be prepared for almost anything. If the amount of information you can save is more limited, then you should save the pieces of information that are the most likely to be useful in any given scenario in "catastrophe-space." Now we have to go about determining what these pieces of information are. We can start by looking at the most likely doomsday scenarios--Yoreth, since you started the thread, what do you think the most likely ones are?
1Eneasz
How to start a fire only using sticks. How to make a cutting blade from rocks. How to create a bow, and make arrows. Basic sanitation.
0ianshakil
I only need one item: The Holy Bible (kidding)
0xamdam
Depends what level you want to achieve post-catastrophe; some, if not most, of your resources and knowledge will be needed to deal with specific effects. In short, your suitcase will be full of survivalist and medical material. In a thought experiment where you freeze yourself until the ecosystem is restored, you can probably use an algorithm of taking the best library materials from each century, corrected for errors, to achieve the level of that century. Both Robinson Crusoe and Jules Verne's "Mysterious Island" explore similar bootstrapping scenarios; interestingly, both use some "outside injections".
JoshuaZ110

I suspect that people are overestimating in their replies how much could be done with Wikipedia. People in general underestimate a) how much technology requires bootstrapping (metallurgy is a great example of this) and b) the extent to which many technologies, even primitive ones, require large populations so that specialization, locational advantages and comparative advantage can kick in (People even in not very technologically advanced cultures have had tech levels regress when they settle large islands or when their locations get cut off from the mainland. Tasmania is... (read more)

3RobinZ
In rough order of addition to the corpus of knowledge: 1. The scientific method. 2. Basic survival skills (e.g. navigation). 3. Edit: Basic agriculture (e.g. animal husbandry, crop cultivation). 4. Calculus. 5. Classical mechanics. 6. Basic chemistry. 7. Basic medicine. 8. Basic political science.
1mstevens
I'm tempted to say "a university library" as the short answer. More specifically, whatever I could get from the science and engineering departments. Pick the classic works in each field if you have someone to filter them. Look for stuff that's more universal than specific to the way we've done things - in computing terms, you want The Art of Computer Programming and not The C Programming Language. In the short term, anything you can find on farming and primitive medicine - all the stuff the better class of survivalist would have on their bookshelf.
2[anonymous]
A dead tree copy of Wikipedia. A history book about ancient handmade tools and techniques from prehistory to now. A bunch of K-12 school books about math and science. Also as many various undergraduate and postgraduate level textbooks as possible.
Yoreth80

Is there any philosophy worth reading?

As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence... (read more)

2Larks
I've always been told that Hegel basically affixed the section about Prussia due to political pressures, and that modern philosophers totally ignore it. Having said that, I wouldn’t read Hegel. I recommend avoiding reading original texts, and instead reading modern commentaries and compilations. 'Contemporary Readings in Epistemology' was the favoured first-year text at Oxford. Bertrand Russell's "History of Western Philosophy" is quite a good read too. The Stanford Encyclopaedia of Philosophy is also very good.

So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?

You might find it more helpful to come at the matter from a topic-centric direction, instead of an author-centric direction. Are there topics that interest you, but which seem to be discussed mostly by philosophers? If so, which community of philosophers looks like it is exploring (or has explored) the most productive avenues for understanding that topic?

Remember that philosophers, like everyone else, lived before the idea of motivated cognition was fully developed; it was commonplace to have theories of epistemology which didn't lead you to be suspicious enough of your own conclusions. You may be holding them to too high a standard by pointing to some of their conclusions, when some of their intermediate ideas and methods are still of interest and value today.

However, you should be selective of who you read. Unless you're an academic philosopher, for instance, reading a modern synopsis of Kantian thought ... (read more)

1zero_call
This is an understandable sentiment, but it's pretty harsh. Everybody makes mistakes -- there is no such thing as a perfect scholar, or perfect author. And I think that when Descartes is studied, there is usually a good deal of critique and rejection of his ideas. But there's still a lot of good stuff there, in the end. I have found Foucault to be a very interesting modern philosopher/historian. His book, I believe entitled "Madness and Civilization" (translated from French), strikes me as a highly impressive analysis on many different levels. His writing style is striking, and his concentration on motivation and purpose goes very, very deep.
4JoshuaZ
Lakatos, Quine and Kuhn are all worth reading. Recommended works from each follow: Lakatos: "Proofs and Refutations" Quine: "Two Dogmas of Empiricism" Kuhn: "The Copernican Revolution" and "The Structure of Scientific Revolutions" All of these have things which are wrong but they make arguments that need to be grappled with and understood (Copernican Revolution is more of a history book than a philosophy book but it helps present a case of Kuhn's approach to the history and philosophy of science in great detail). Kuhn is a particularly interesting case - I think that his general thesis about how science operates and what science is is wrong, but he makes a strong enough case that I find weaker versions of his claims to be highly plausible. Kuhn also is just an excellent writer full of interesting factual tidbits. This seems like in general not a great attitude. The Descartes case is especially relevant in that Descartes did a lot of stuff, not just philosophy. And some of his philosophy is worth understanding simply due to the fact that later authors react to him and discuss things in his context. And although he's often wrong, he's often wrong in a very precise fashion. His dualism is much more well-defined than that of people before him. Hegel however is a complete muddle. I'd label a lot of Hegel as not even wrong. ETA: And if I'm going to be bashing Hegel a bit, what kind of arrogant individual does it take to write a book entitled "The Encyclopedia of the Philosophical Sciences" that is just one's magnum opus about one's own philosophical views and doesn't discuss any others?
2Emile
I've enjoyed Nietzsche, he's an entertaining and thought-provoking writer. He offers some interesting perspectives on morality, history, etc.
1Bongo
Maybe LW should have resident intellectual historians who read philosophy. They could distill any actual insights from dubious, old or badly written philosophy, and tell whether a work is worth reading for rationalists.
4mindviews
Yes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you're asking the question in this forum I am assuming you're looking for something of use/interest to a rationalist. I've developed quite a respect for Hilary Putnam and have read many of his books. Much of his work covers philosophy of the mind with a strong eye towards computational theories of the mind. Beyond just his insights, my respect also stems from his intellectual honesty. In the Introduction to "Representation and Reality" he takes a moment to note, "I am, thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced." In short, as a rationalist I find reading his work very worthwhile. I also liked "Objectivity: The Obligations of Impersonal Reason" by Nicholas Rescher quite a lot, but that's probably partly colored by having already come to similar conclusions going in. PS - There was this thread over at Hacker News that just came up yesterday if you're looking to cast a wider net.
8Vladimir_M
Yoreth: That's an extremely bad way to draw conclusions. If you were living 300 years ago, you could have similarly heard that some English dude named Isaac Newton was spending enormous amounts of time scribbling obsessive speculations about Biblical apocalypse and other occult subjects -- and concluded that even if he had some valid insights about physics, it wouldn't be worth your time to go looking for them.
2wedrifid
None that actively affiliate themselves with the label 'philosophy'.
Yoreth90

Long ago I read a book that asked the question “Why is there something rather than nothing?” Contemplating this question, I asked “What if there really is nothing?” Eventually I concluded that there really isn’t – reality is just fiction as seen from the inside.

Much later, I learned that this idea had a name: modal realism. After I read some about David Lewis’s views on the subject, it became clear to me that this was obviously, even trivially, correct, but since all the other worlds are causally unconnected, it doesn't matter at all for day-to-day life. E... (read more)

1Roko
http://www.nickbostrom.com/papers/anthropicshadow.pdf
0NancyLebovitz
From what I've heard, there was a lot of talk about bomb shelters, but very few of them were built.
5cousin_it
If you think doom is very probable and we only survived due to the anthropic principle, then you should expect doom any day now, and every passing day without incident should weaken your faith in the anthropic explanation. If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse. (These arguments are not standard LW fare, but I've floated them here before and they seem to work okay.)
Yoreth70

A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:

Firstly, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of t... (read more)

3timtyler
Re: "can a mind understand itself?" That is no big deal: copy the mind a few billion times, and then it will probably collectively manage to grok its construction plans well enough.
0timtyler
Re: "If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don’t even see that happening." ? I see plenty of scaremongering around machine intelligence. So far, few governments have supported it - which seems fairly sensible of them.
2NancyLebovitz
Another argument against the difficulties of self-modeling point: It's possible to become more capable by having better theories rather than by having a complete model, and the former is probably more common. It could notice inefficiencies in its own functioning, check to see if the inefficiencies are serving any purpose, and clean them up without having a complete model of itself. Suppose a self-improving AI is too cautious to go mucking about in its own programming, and too ethical to muck about in the programming of duplicates of itself. It still isn't trapped at its current level, even aside from the reasonable approach of improving its hardware, though that may be a more subtle problem than generally assumed. What if it just works on having a better understanding of math, logic, and probability?
0NancyLebovitz
How do we know that governments aren't secretly working on AI? Is it worth speculating about the goals which would be built into a government-designed AI?

Two counters to the majoritarian argument:

First, it is being mentioned in the mainstream - there was a New York Times article about it recently.

Secondly, I can think of another monumental, civilisation-filtering event that took a long time to enter mainstream thought - nuclear war. I've been reading Bertrand Russell's autobiography recently, and am up to the point where he begins campaigning against the possibility of nuclear destruction. In 1948 he made a speech to the House of Lords (UK's upper chamber), explaining that more and more nations would attemp... (read more)

-1[anonymous]
Regarding majoritarianism: Crash programs in basic science because of speculative applications are very uncommon. Decades of experimentation with nuclear fission only brought a crash program with the looming threat of the Nazis, and after a practical demonstration of a chain reaction. Over the short time spans over which governments make their plans, the probability of big advances in AI basic science coming is relatively small, even if it is substantial over the longer term. So you get all the usual issues with attending to improbable (in any given short period) dangers that no one has recent experience with. Note things like hurricane Katrina, the Gulf oil spill, etc. The global warming effects of fossil fuel use have been seen as theoretically inevitable since at least the Eisenhower administration, and momentum for action has only gotten mobilized after a long period of actual warming providing pretty irrefutable (and yet widely rejected anyway!) evidence.
2xamdam
In addition to theoretical objections, I think the majoritarian argument is factually wrong. Remember, 'future is here, just not evenly distributed'. http://www.google.com/trends?q=singularity shows a trend http://www.nytimes.com/2010/06/13/business/13sing.html?pagewanted=all - this week in NYT. Major MSFT and GOOG involvement. http://www.acceleratingfuture.com/michael/blog/2010/04/transhumanism-has-already-won/
9DanArmak
This is strictly true if you're talking about the working memory that is part of a complete model of your "mind". But a mind can access an unbounded amount of externally stored data, where a complete self-representation can be stored. A Turing Machine of size N can run on an unbounded-size tape. A von Neumann PC with limited main memory can access an unbounded-size disk. Although we can only load a part of the data into working memory at a time, we can use virtual memory to run any algorithm written in terms of the data as a whole. If we had an AI program, we could run it on today's PCs and while we could run out of disk space, we couldn't run out of RAM.
4Roko
Stephen Hawking, Martin Rees, Max Tegmark, Nick Bostrom, Michio Kaku, David Chalmers and Robin Hanson are all smart people who broadly agree that >human AI in the next 50-100 years is reasonably likely (they'd all give p > 10% to that with the possible exception of Rees). On the con side, who do we have? To my knowledge, no one of similarly high academic rank has come out with a negative prediction. Edit: See Carl's comment below. Arguing majoritarianism against a significant chance of AI this century is becoming less tenable, as a significant set of experts come down on the "yes" side. It is notable that I can't think of any very reputable nos. The ones that come to mind are Jaron Lanier and that Glenn Zorpette.
cousin_it110

The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind.

If your argument is based on information capacity alone, it can be knocked down pretty easily. An AI can understand some small part of its design and improve that, then pick another part and improve that, etc. For example, if the AI is a computer program, it has a sure-fire way of improving itself without completely understanding its own design: build faster processors. Alternatively you could imagine a population of a millio... (read more)

1mindviews
The critical aspect of a "major-impact intelligence-explosion singularity" isn't the method for improvement but the rate of improvement. If computer processing power continues to grow at an exponential rate, even an inefficiently improving AI will have the growth in raw computing power behind it. I don't have any articles but I'll take a stab at counterarguments. A Majoritarian counterargument: AI turned out to be harder and further away than originally thought. The general view is still tempered by the failure of AI to live up to those expectations. In short, the AI researchers cried "wolf!" too much 30 years ago and now their predictions aren't given much weight because of that bad track record. A mind can't understand itself counterargument: Even accepting as a premise that a mind can't completely understand itself, that's not an argument that it can't understand itself better than it currently does. The question then becomes which parts of the AI mind are important for reasoning/intelligence and can an AI understand and improve that capability at a faster rate than humans.
5Morendil
I'd just forget the majoritarian argument altogether, it's a distraction. The second question does seem important to me; I too am skeptical that an AI would "obviously" have the capacity to recursively self-improve. The counter-argument is summarized here: whereas we humans are stuck with an implementation substrate which was never designed for understandability, an AI could be endowed with both a more manageable internal representation of its own capacities and a specifically designed capacity for self-modification. It's possible - and I find it intuitively plausible - that there is some inherent general limit to a mind's capacity for self-knowledge, self-understanding and self-modification. But an intuition isn't an argument.
Yoreth70

This seems to be another case where explicit, overt reliance on a proxy drives a wedge between the proxy and the target.

One solution is to do the CEV in secret and only later reveal this to the public. Of course, as a member of said public, I would instinctively regard with suspicion any organization that did this, and suspect that the proffered explanation (some nonsense about a hypothetical "Dr. Evil") was a cover for something sinister.

2blogospheroid
Since I wrote about Extrapolated Volition as a solution to Goodhart's law, I think I should explain why I did so. Here, what is sought is friendliness (your goal, G), whereas the friendliness architecture, the actual measurable thing, is the proxy (G*). Extrapolated volition is one way of avoiding G* diverging from G, because when one extrapolates the volition of the persons involved, one gets closer to G. In Friendly AI, it is the volition of all of living humanity that is to be extrapolated. Unfortunately, this proxy, like any other proxy, is subject to hack attacks. The scale of this problem is such that other solutions proposed cannot be utilised. EDIT: edited for grammar in 3rd para
Yoreth160

Hi!

I've been registered for a few months now, but only rarely have I commented.

Perhaps I'm overly averse to loss of karma? "If you've never been downvoted, you're not commenting enough."

Yoreth00

Suppose we had a G.O.D. that takes N bits of input, and uses the input as a starting-point for running a simulation. If the input contains more than one simulation-program, then it runs all of them.

Now suppose we had 2^N of these machines, each with a different input. The number of instantiations of any given simulation-program will be higher the shorter the program is (not just because a shorter bit-string is by itself more likely, but also because it can fit multiple times on one machine). Finally, if we are willing to let the number of machines shrink t... (read more)
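(A toy enumeration of the weighting this setup produces, under the simplifying assumption that a "simulation-program" can be identified with a fixed bit pattern; the function name and parameters are illustrative, not anything from the original post:)

```python
from itertools import product

def total_occurrences(program: str, n: int) -> int:
    """Count occurrences of `program` as a substring over all 2**n possible
    n-bit inputs. Each starting position contributes 2**(n - len(program))
    matching inputs, so shorter programs carry exponentially more weight."""
    total = 0
    for bits in product("01", repeat=n):
        s = "".join(bits)
        total += sum(s.startswith(program, i) for i in range(n - len(program) + 1))
    return total

# With n = 12: total_occurrences("101", 12) == 10 * 2**9 == 5120,
# while total_occurrences("101101", 12) == 7 * 2**6 == 448.
```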

Yoreth10

How so? Could you clarify your reasoning?

My thinking is: Given that a scientist has read (or looked at) a paper, they're more likely to cite it if it's correct and useful than if it's incorrect. (I'm assuming that affirmative citations are more common than "X & Y said Z but they're wrong because..." citations.) If that were all that happened, then the number of citations a paper gets would be strongly correlated with its correctness, and we would expect it to be rare for a bad paper to get a lot of citations. However, if we take into accou... (read more)

0byrnema
I see: in the case that a paper is read, deciding that it sounds iffy and deciding not to cite it would correlate strongly with the paper actually having wrong conclusions. I was considering that scientists rarely check the conclusions of the papers they cite by reading them, but just decide based on writing and other signals whether the source is credible. So a well-written paper with a wrong conclusion could get continued citations. But indeed, if the paper is written carefully and the methodology convincing, it would be less likely that the conclusion is wrong.
Yoreth90

I am reminded of a paper by Simkin and Roychowdhury where they argued, on the basis of an analysis of misprints in scientific paper citations, that most scientists don't actually read the papers they cite, but instead just copy the citations from other papers. From this they show that the fact that some papers are widely cited in the literature can be explained by random chance alone.

Their evidence is not without flaws - the scientists might have just copied the citations for convenience, despite having actually read the papers. Still, we can easily imagin... (read more)
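(A toy simulation of the copying process being described - not Simkin and Roychowdhury's actual model, just a sketch showing that copying alone, with no quality signal at all, concentrates citations on a few papers:)

```python
import random

def simulate_copy_citations(n_papers=2000, refs_per_paper=5, copy_prob=0.9, seed=0):
    """Toy random-copying citation model: each new paper picks one earlier paper
    at random and mostly copies entries from its reference list, rather than
    sampling the literature independently. Returns citation counts, sorted
    descending; a handful of early papers end up with most of the citations."""
    rng = random.Random(seed)
    ref_lists = [[]]        # paper 0 cites nothing
    citations = [0]
    for p in range(1, n_papers):
        donor = ref_lists[rng.randrange(p)]      # a random earlier paper to copy from
        refs = set()
        while len(refs) < min(refs_per_paper, p):
            if donor and rng.random() < copy_prob:
                refs.add(rng.choice(donor))      # copy a citation without reading it
            else:
                refs.add(rng.randrange(p))       # cite a random earlier paper directly
        ref_lists.append(list(refs))
        citations.append(0)
        for r in refs:
            citations[r] += 1
    return sorted(citations, reverse=True)

# simulate_copy_citations()[:10] dominates the tail even though no paper is "better" than any other.
```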

0lakeswimmer
A question: if referencing is not based on knowledge, or perhaps even relevance, what does this imply for Google's algorithm? Does it not organize search responses according to page links?
3byrnema
How so? Could you clarify your reasoning? * Scientists cite journals with conclusions that are convenient to cite (either because they corroborate their views or define a position to pivot from or argue with) whether or not they have been read. Journals with easily debunked conclusions might equivalently be not read (and thus unexamined) or read (and simply trusted). * I think that the real test for whether cited publications are read or not is the following: if a publication is consistently cited for a conclusion it does not actually present, then this is evidence of no one actually having read the publication. I recall in my research that it was very convenient in the literature to cite one particular publication for a minor but foundational tenet in the field. However, when I finally got a hard-copy of the paper I couldn't find this idea explicitly written anywhere. The thing is -- contradicting what I say above, unfortunately -- I think the paper was well-read, but people don't double-check citations if the citation seems reasonable.
1PhilGoetz
That's great! I've wondered why so many mathematical papers (in non-math subject areas) contain misprints and omissions that make their equations uninterpretable. I'm wondering if even the referees and editors read them. And I have a confession. I didn't read all of the papers I referenced!
4Paul Crowley
I copy citations from other papers. When I can, I copy and paste BibTeX stanzas I find on the Web.
1thomblake
Indeed this is commonplace for all academic fields, though I don't see the problem with it, so long as the effect doesn't squash new work.