Why expect AGIs to be better at thinking than human beings? Is there some argument that human thinking problems are primarily due to hardware constraints? Has anyone here put much thought into parenting/educating AGIs?


I suspect this has been answered on here before in a lot more detail, but:

  • Evolution isn't necessarily trying to make us smart; it's just trying to make us survive and reproduce
  • Evolution tends to find local optima (see: obviously stupid designs like how the optical nerve works)
  • We seem to be pretty good at making things that are better than what evolution comes up with (see: no birds on the moon, no predators with natural machine guns, etc.)

Also, specifically in AI, there is some precedent for there to be only a few years between "researchers get ... (read more)

0curi6y
Do you expect AGI to be qualitatively or quantitatively better at thinking than humans? Do you think there are different types of intelligence? If so, what types? And would AGI be the same type as humans? EDIT: By "intelligence" I mean general intelligence.

I'm getting an error trying to load Lumifer's comment in the highly nested discussion, but I can see it in my inbox, so I'll try replying here without the nesting. For this comment, I will quote everything I reply to so it stands alone better.

Isn't it convenient that I don't have to care about these infinitely many theories?

why not?

Why not what?

Why don't you have to care about the infinity of theories?

you can criticize categories, e.g. all ideas with feature X

How can you know that every single theory in that infinity has feature X?

... (read more)
0Lumifer6y
Huh, that shaft ended in a loud screech and a clang... Let's drop another shaft! I don't have to care about the infinity of theories because if they all make exactly the same predictions, I don't care that they are different. This is highly convenient because I am, to quote an Agent, "only human" and humans are not well set up to deal with infinities. How do you know that without examining the specific theories? Right, but the point is that you do not have a solution at the moment and there is an infinity of theories which propose potential shovel-ready solutions. You have no basis for rejecting them because "I don't know of a solution with a shovel" -- they are new-to-you solutions, that's the whole point. Yes, of course, but you were claiming there are no such things as observations at all, merely some photons and such flying around. Being prone to errors is an entirely different question. Predictions do not come out of nowhere. They are made by models (= imperfect representations of reality) and "entity" is just a different word for a "model". If you don't have any entities, what exactly generates your predictions?
0curi6y
I don't find these replies very responsive. Are you trying to understand what I'm getting at, or just writing local replies to a selection of my points? This is not the first time I've tried to write some substantial explanation and gotten not much engagement from you (IMO).
0Lumifer6y
Oh, I understand what you are getting at. I just think that you're wrong. I'm writing local replies because fisking walls of text gets tedious very very quickly. There is no point in debating secondary effects when it's pretty clear that the source disagreement is deeper.
0curi6y
I'm going to end the discussion now, unless you object. I'm willing to consider objections. I'm stopping for a variety of reasons, some of which I talked about previously, like your discussion limitations, such as those about references. I think you don't understand and aren't willing to do what it takes to understand. If we stop and you later want to get these issues addressed, you would be welcome to post to the FI forum: http://fallibleideas.com/discussion-info
0Lumifer6y
s/understand/be convinced/g and I'll agree :-) Was a fun ride!

Here is a somewhat relevant video.

Has anyone here put much thought into parenting/educating AGIs?

I'm interested in General Intelligence Augmentation: what it would be like to try to build/train an artificial brain lobe and make it part of a normal human intelligence.

I wrote a bit on my current thoughts on how I expect to align it using training/education here but watching this presentation is necessary for context.

Because

"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary switch operation"

https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s

AI will be quantitatively smarter because it'll be able to think over 10,000 times faster (an arbitrary, conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.
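The hardware numbers quoted above can be turned into a rough back-of-envelope calculation. All the figures below are illustrative approximations chosen for this sketch, not measurements:

```python
# Back-of-envelope comparison of brain vs. silicon "clock speed".
# All numbers are rough, illustrative figures.

SPEED_OF_LIGHT = 3.0e8       # m/s, signal speed in an optical interconnect (approx.)
AXON_SIGNAL_SPEED = 1.0e2    # m/s, a fast myelinated axon (approx.)

CPU_CLOCK_HZ = 1.0e9         # 1 GHz, a modest modern clock rate
NEURON_FIRING_HZ = 1.0e2     # ~100 Hz peak neuron firing rate

signal_ratio = SPEED_OF_LIGHT / AXON_SIGNAL_SPEED  # ~3e6
clock_ratio = CPU_CLOCK_HZ / NEURON_FIRING_HZ      # ~1e7

print(f"signal propagation ratio: {signal_ratio:.0e}")
print(f"switching-rate ratio:     {clock_ratio:.0e}")

# Either ratio alone exceeds 10,000, which is why a 10,000x speedup
# reads as a conservative lower bound.
assert min(signal_ratio, clock_ratio) > 10_000
```

Under these assumptions, the quoted 10,000x figure is smaller than either the signal-speed gap or the switching-rate gap taken on its own.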

0Lumifer6y
My calculator can add large numbers much, much faster than I. That doesn't make it "quantitatively smarter". Given that no one has any idea about what that algorithm might look like, statements like this seem a bit premature.
0Tehuti6y
Your brain actually performs much more analysis each second than any computer we have: https://www.scienceabc.com/humans/the-human-brain-vs-supercomputers-which-one-wins.html Of course it is structurally very different from a CPU or a GPU, etc., but the overall power of the brain is still far greater.
0curi6y
I think AGIs will be built by evolution, and use evolution for their own thinking, because I think human thinking uses evolution (replication with variation and selection of ideas). I don't think any other method of knowledge creation is known, other than evolution.
2Lumifer6y
The scientific method doesn't look much like evolution to me. At a simpler level, things like observation and experimentation don't look like it, either.
0username26y
I went down the rabbit hole of your ensuing discussion and it seems to have broken LW, but didn't look like you were very convinced yet. Thanks for taking one for the team.
0Lumifer6y
Too deep we delved there, and woke the nameless fear... I suspect there is an implicit max thread depth and once it's reached, LW's gears and cranks (if only!) screech to a halt.
0curi6y
The scientific method involves guesses (called "hypotheses") and criticism (including by experimental tests). That follows the pattern of evolution (exactly, not by analogy): replication with variation (guessing), and selection (criticism).
2Lumifer6y
Not at all. Hypothesis generation doesn't look like taking the current view and randomly changing one element in it. More importantly, science is mostly teleological and evolution is not. But let's take a trivial example. Let's say I'm walking by a food place and I notice a new to me dish. I order it, eat it, and decide that it's tasty. I have acquired knowledge. How's that like evolution?
0curi6y
The way you decide it's tasty is by guessing it's tasty, and guessing some other things, and criticizing those guesses, and "it's tasty" survives criticism while its rivals don't. Lots of this is done at an unconscious level. It has to be this way because it's the only known way of creating knowledge that could actually work. If you find it awkward or burdensome, that doesn't make it impossible – which puts it ahead of its rivals.
0Lumifer6y
The word you're looking for is "testing". I test whether that thing is tasty. Testing is not the same thing as evolution. That's an entirely circular argument.
0curi6y
Evolution is an abstract pattern which makes progress via the correction of errors using selection. If something fits the pattern, then it's evolution. Would you agree with something like: if induction doesn't work, and CR does, then it's a good idea to accept CR? Even if you find it counter-intuitive and awkward from your current perspective?
0Lumifer6y
I think we might be having terminology problems -- in particular I feel that you stick the "evolution" label on vastly broader things. First, the notion of progress. Evolution doesn't do progress, not being teleological. Evolution does adaptation to the current environment. A decrease in complexity is not an uncommon event in evolution, for example. A mass die-off is not an uncommon event, either. Second, evolution doesn't correct "errors". Those are not errors, those are random exploratory steps. A random walk. And evolution does not correct them, it just kills off those who misstep (which is 99.99%+ of steps). Sure. Please provide empirical evidence. And I still don't understand what's wrong with plain-vanilla observation as a way to acquire knowledge.
0curi6y
Killing off a misstep is a way of getting rid of that error. The stuff that doesn't work is probabilistically removed from later generations – so the effect there is error correction. (Experimenting itself isn't a mistake, but some of the experiments work badly – error.) Evolution adapts, yes. Adapting something to solve a particular problem = creating knowledge of how to solve that problem. Biological evolution is limited in what problems it solves, but still powerful enough to create human intelligence because of the ability for a single piece of knowledge to solve multiple problems. Abstractly, guesses and criticism fit the pattern of evolution: there are generations of ideas. The ideas in the next generation aren't purely random; they retain some things that worked in the previous generation (to some extent we're seeing variation instead of something totally separate), and then criticism is selection. If you keep applying the same criticism over and over, you'll get ideas adapted to not being refuted by that criticism. Our disagreement is about philosophy. What do you observe (observation is lossy and there are many choices about where to focus your attention), and then what do you learn from it? Any set of observations fits infinitely many patterns.
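The abstract pattern under dispute here (replication with variation, plus selection) can be shown as a toy hill-climbing sketch. The target string and fitness function are purely illustrative stand-ins for "a problem to solve":

```python
import random

# Toy sketch of "evolution as an abstract pattern": replication with
# variation (guessing) plus selection (criticism).

TARGET = "tasty"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # "Criticism": count the positions that survive comparison with the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # "Variation": replicate the parent with one random change.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

random.seed(0)
population = ["".join(random.choice(ALPHABET) for _ in TARGET)]
for generation in range(5000):
    parent = max(population, key=fitness)                       # selection
    population = [parent] + [mutate(parent) for _ in range(9)]  # replication + variation
    if fitness(parent) == len(TARGET):
        break

print(max(population, key=fitness))
```

Whether this counts as "the same thing" human thinking does is exactly the philosophical disagreement in the thread; the sketch only shows that error correction via variation and selection is a runnable pattern.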
2Lumifer6y
I still don't think so, but as I mentioned it's merely a terminology problem: you are using the word "evolution" in an unexpected way. Ah, well then. In this case I probably should inform you that your mistakes are due to the invisible dragon in my garage. When he gets gas, his dreams are troubled and seep into the minds of humans, corrupting their epistemology. See, he is a Philosophical Dragon. I observe a rock and learn that there is a rock in front of me.
0curi6y
why did you learn there was a rock in front of you, instead of an alien that looks like a rock? do you, perhaps, have a criticism of the alien suggestion?
0Lumifer6y
I cannot guarantee that it's not an alien that looks like a rock, but my priors insist that it's highly improbable. Me, no, but you might want to talk to that chap over there, William of Ockham...
0curi6y
so you prefer a dogmatic prior over criticisms which are themselves exposed to criticism?
0Lumifer6y
How is it dogmatic when a prior's sole purpose in life is literally to be updated, to change? Which criticisms? Where do they come from? Who makes them and for what reason?
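Lumifer's position, that an Ockham-style prior rather than explicit criticism does the filtering, can be sketched as a one-step Bayes update. All numbers are illustrative choices for this example:

```python
# One-step Bayes update for "that rock is a disguised alien".
# All probabilities are illustrative, not measured.

prior_alien = 1e-9               # prior: a disguised alien is highly improbable
p_looks_rock_given_alien = 1.0   # a disguised alien would look like a rock
p_looks_rock_given_rock = 1.0    # so would an actual rock

# Bayes' rule: the observation "looks like a rock" doesn't discriminate
# between the hypotheses, so the posterior barely moves.
posterior = (p_looks_rock_given_alien * prior_alien) / (
    p_looks_rock_given_alien * prior_alien
    + p_looks_rock_given_rock * (1 - prior_alien)
)

print(posterior)  # still roughly 1e-9
```

The prior is doing all the work here, which is the crux of the disagreement: curi asks below how critical arguments (as opposed to observations) would enter such an update at all.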
0curi6y
not by critical arguments. humans make critical arguments, like the ones in this discussion.
0Lumifer6y
So we started here: There are just two of us here, me and the rock. If there are no humans around to make criticisms, I cannot acquire knowledge? If these critical arguments get to count as evidence, yes, by them, too. If they don't, well, that raises interesting questions.
0curi6y
You are a human who is present and can criticize. You're defining "evidence" differently than I am. I think evidence refers to what you might call empirical evidence. How do you incorporate critical arguments into probability updating?
0Lumifer6y
But I don't do that. My eyes send some information to my brain, my brain does, basically, pattern-matching and says "looks like a rock". Another part of the brain runs a sanity check ("Would seeing a rock be reasonable here? Yes.") and I'm done. In particular, I do NOT generate a large number of hypotheses about what that thing might be and internally criticize them. Easily enough. Valid critical arguments tend to point to empirical evidence which contradicts the hypothesis. Other than that, the only valid arguments that come to mind are those which demonstrate incoherency or internal contradictions.
0curi6y
We have massive philosophical differences. I think you're wrong in important ways and that your school of thought has been refuted by literature it hasn't answered (by e.g. Popper and Deutsch). Are you interested in resolving this in a serious, thorough way to a conclusion? I understand this would take a large effort by each of us.
0Lumifer6y
Depends on what "resolving" means. If you have in mind pinpointing the precise issues from which our disagreement stems, sure. But I don't think it would take a large effort. On the other hand, if what you have in mind is teaching me the proper way to do philosophy, that's much more problematic...
0curi6y
I mean figuring out the disagreements and discussing them and resolving the disagreements: actually figuring out which positions are correct and why, not just agreeing to disagree.
0Lumifer6y
I suspect we have disagreements about what does "correct" mean and what criteria of correctness we can use to establish it :-) But we can start by figuring out the precise questions to which we answer differently. Do you have any guesses?
0curi6y
You believe Bayesian Epistemology and I believe Critical Rationalism. They disagree about e.g. induction, empiricism, instrumentalism.
0Lumifer6y
Labels aren't terribly useful. Let's start with the basics. We'll probably agree that external/objective reality exists. That we can gain some knowledge of that reality, and that this knowledge cannot be perfect. So far so good? Thus we have reality and we have imperfect models of this reality in our heads. What happens when we have multiple models for the same piece of reality?
0curi6y
Why not? They have meanings which people familiar with the field have substantial convergence about. Yes. Yes, and these models are not merely about prediction. Critical arguments.
0Lumifer6y
Because most discussions suffer from the problem of different people understanding the same word differently. This is especially pronounced for labels (aka shortcuts to complicated concepts). Hold on. First, is it acceptable to have multiple models at the same time? Do you have to declare one of them the best? It's not uncommon to have many models none of which you can falsify at the moment, how do you sort them out?
0curi6y
You can "have" multiple models in the sense of knowing about them and being able to use them if you wanted to. But if your N models all contradict, then at least N-1 of them are wrong. So you shouldn't simultaneously believe 2+ of them are true. You can always have a non-refuted idea about how to proceed in life (with low enough resource cost, not with e.g. infinite time). This stuff is covered at length but is complicated to learn. Are you interested in doing things like reading a bunch and discussing it as you go along so you can learn it?
0Lumifer6y
Is "true" a binary value or you can have fractions? Is it possible for a model to be X% true? Also, you have N models which contradict somewhere (otherwise they would be identical). You can't falsify any of them at the moment. How do you go about selecting between them? No. As I pointed out before, I am not interested in being taught.
0curi6y
Binary. I said we have answers to this but they are complicated, and you said you don't want to read enough to understand them. I don't know why you're repeating the question. Do you disbelieve me that understanding it in a few paragraphs of forum discussion is unrealistic? I'm totally open to discussing approaches to resolving disagreement, but I'm not open to you simply ignoring my suggestions about how to proceed and then trying to proceed in a way I don't think will work without saying why I'm mistaken about it being a bad approach. I'm also open to discussing where it's worth spending time and why, and how to decide that, and addressing skepticism. One approach is to start reading things and stop at the first thing you think is a mistake or have a question about, then comment. If you think you find a mistake, you only read more if it's fixed or you then discover you were mistaken about the mistake. What if you're mistaken? Is the plan to stay mistaken, or is there a way to become less wrong? Do you have some kind of alternative you think is better which lets you learn all the important things while e.g. avoiding reading?
0Lumifer6y
That's interesting. We've agreed that all models are imperfect representations of reality. Why are some imperfect models true? From a certain point of view all of them are false. Why, yes, I do. If you can't concisely draw at least the outlines of your position, I might even disbelieve that you understand your own views. You have reading comprehension problems. I didn't ignore them -- I explicitly said I don't agree to them. Let me repeat: I am not interested in being assigned a reading list. That's quite possible. But as I noted, reading comprehension is useful: I am quite interested in learning, I am quite uninterested in being taught, at least in this context.
0curi6y
I didn't say that they are. You didn't ask for an outline, you asked for an answer. Those are different things. It's hard to give outlines of solutions to people who are unfamiliar with the framework your solution is designed within, and who don't want to put in effort. Do you understand that problem? I also don't see the point of trying to write custom material for you (under the extra constraint of keeping it very short, and while having very limited info about you to customize with), when you don't want to read the canonical stuff, and I don't expect you to still be speaking to me in a few days (because of repeated indicators of hostility and disinterest). Under what circumstances would you learn or answer David Deutsch on epistemology?
0Lumifer6y
So what is it that you are saying? All models are imperfect. If some are true, my question stands. If none are true, I don't see any use for the concept of "true". No, I don't. If you are unable to explain your position other than by saying "Go read the book, it explains everything", I have an inclination to think you yourself don't understand what you are trying to say. I don't expect you to "write custom material". I expect you to be able to hold a conversation where you can put forward your views in a clear and concise manner. The thing is, if I want to go read Popper or Deutsch, I can go read Popper or Deutsch. I don't see what you will be able to add to my reading -- I can do it myself.
0curi6y
Would that be your belief if I wrote the book? If I was in the acknowledgements of the book? If my best friend wrote it and I'd discussed the material at length with him? If the author was a fan of mine? You seem to be trying to judge credentials in some way without saying where the lines are, and without asking any questions about mine.

And I didn't say "go read the book, it explains everything"; I said I don't want to rewrite the book for you, so start reading it and reply when you have your first criticism or question – just as if it was a forum post. You will have comments when you read – questions and criticisms – which we can discuss as they come up. That's different than reading it alone.

Why do you want me to rewrite canonical material? Are you going to refuse to read any links or references of any kind, ever? What are the rules for when you do read those? Why don't you go read DD/Popper, with or without me? Have you answered them? Have you any reference answering them? If not, why leave outstanding criticisms unanswered? Isn't that problematic?

The point is, you seem to draw some important distinction between 1) material I take responsibility for but didn't personally write (maybe you're assuming I won't take responsibility for things I reference the same as if I wrote them? I will.) 2) material I wrote in the past. 3) material I wrote specifically for this conversation.

We have a massive archive of writing, some of it very polished. It refutes various claims LW makes that you seem to believe. You haven't answered it. You don't seem to know of anything that answers it. Yet you aren't interested. Under what circumstances would you be interested?

If you want material to be customized for you in some way, I don't know what way it is. If you read a little canonical stuff and said "This isn't working for me b/c it targets audience X and I have trait Y" then I could help bridge that gap for you. But you haven't expressed any objection of that type.
0Lumifer6y
I looked you up. That clarifies a lot: Oh, and LOL at So, Mr. Expert On Everything (and "world class at several computer games"), I am sorry but I'm looking neither for a philosophy tutor nor for a life mentor. You want to teach me: I do not desire to be taught. We can talk about interesting problems, e.g. in epistemology, but if your position is that reading your favourite book has to be the beginning of all discussions, we're not going to get anywhere. Yes. Inability to clearly and succinctly formulate the main tenets points to a lack of understanding. I don't care if you're friends with Deutsch and spent a lot of time chatting with him. "Answering" Popperianism requires a book-length effort at least. I don't see any reason for me to spend that effort. As to references, Popper published many decades ago. Since the entire world hasn't converted to his views, I would expect to find a lot of references which disagree with him and Deutsch. Surely you're not arguing that there are none? No, I do not. I'm attempting to hold a small, local, mostly self-contained conversation about epistemology where we can build certain structures out of certain well-defined words and see if they fail under stress. You want to turn it into an educational "read the textbook" session. So, be specific. Which claims do you think I believe? Please list and refute. As to "a massive archive of writing", yes, indeed we do. Much of it disagrees with Popper and Deutsch. So what? Sigh. Let me repeat once again, in caps and bold: I DO NOT WANT TO BE TAUGHT BY YOU.
0curi6y
When I rewrite canonical material for this discussion, what should I change from the original? How should it differ from copy/pasting passages? Should I just paste stuff from sources and not tell you it's pasted, and then you'll engage with it? Where is the argument that CR is mistaken? CR provides arguments that Bayesian Epistemology is mistaken. Has anyone done it? If not, do you see a problem there? If so, can you give a reference that you will take responsibility for? I claim none of the existing criticism of CR is correct. I take it you don't know of any that's correct, but wish to ignore the matter anyway. Why? Under what circumstances do you think arguments should be answered instead of ignored? Only when popular? Laughing at me is rude and a non-argument. Yelling is also rude. You seem to be hostile to the idea of discussing methodology before discussing a particular topic, even though we disagree about methodology. Do you think discussion methodology is unimportant and boring?
0Lumifer6y
I don't expect you to be a copy-and-paste bot, I expect you to be able to hold a conversation. I don't particularly care whether you quote, modify, or invent from scratch. You have been remarkably resistant to a concise formulation of your views relevant to our discussion -- if you feel that nothing less than Deutsch's whole book can do, well, we have a problem. Of course you do :-) Notice how you have NOT presented any arguments to be answered. You merely pointed in the general direction of a philosophical theory which claims (don't they all?) to have the answers. I laugh a lot. Laughing is good. As to yelling, well, you ignored that sentence only what, three or four times? Do you hear me now? :-) Au contraire. I would like to discuss methodology, but "go read a book" is not discussion.
0curi6y
I'm trying to discuss the methodology of reading text until your first comment/question/criticism and then replying. You have been ignoring this. I did not ignore you about teaching; I heard you and I'm trying to have a peer discussion. But you keep interpreting things differently than I do. OK, understandable, but you need to be tolerant of different perspectives instead of yelling. I am not yelling about being ignored about the methodology point about looking for the first mistake. I can give you specific sources with details, but first I asked if you'd be willing to look at them and you said no, so that's why I didn't actually give you a specific reference. I'm also bringing up that there is literature criticizing your school of thought, which your school of thought seems to have no answer to – isn't that a problem? Or what is your methodology such that that is ignorable? Or do you deny this is the case? We disagree about e.g. induction. So you want me to rewrite one of the arguments about induction I've written in the past, because you don't want a reference. Right? I don't understand the purpose of this. Explain? It sounds like duplication of work to me.
0Lumifer6y
So you want to do exegesis. That makes the subject of the inquiry the text itself and the meaning contained in it. The issue is that I'm not particularly interested in the text and CR. I'm interested in basic epistemological approaches of which CR is merely one. It's basically the difference between dissecting frogs and reading a book about the proper ways to dissect frogs and what you would find if you cut one open. In this case I want to dissect frogs and not read books. I'm not ignoring it -- I'm explicitly telling you I don't want it. Oh boy. From the fact that you found me on LW you immediately deduced what my school of thought is? That might have been... hasty :-) And remember, I told you that labels are not terribly useful? We do? Did I say anything about induction? I'm sure there is a strawman waiting in the wings to be conveniently demolished, but what does it have to do with me?
0curi6y
I have been paying attention to what you wrote, e.g.: This statement indicates to me that we disagree about induction. What exactly do you think is different between a text by DD, a text by me, and new text typed by me into this forum? To me they are all text, but you treat them totally differently. Please explain the methodology. You express your disinterest in CR. Since I'm writing CR ideas, I take that as disinterest in what I'm saying. What would it take for you to become interested and try to address all known criticisms of your positions? Also, do you have a website where you've written down your views to expose them to criticism, or do you have a reference which does this for you and which you'll take responsibility for?
0Lumifer6y
Induction isn't about acquiring knowledge from observations, induction is about generalizing from some limited set of observations to universal rules/laws. The key is a couple of comments up: Note: local. Note: self-contained. As I said, I don't care if you quote or write original text. What I'm looking for is small, specific, limited in scope. To the extent you're promoting/popularizing CR, yes, I'm uninterested in being swayed to its side. Time. Loads and loads of free time :-D Nope and nope. Sorry.
0curi6y
I don't understand how arguing with me about induction is going to prove your point that we don't disagree. Why do you want it to be local and self-contained? I don't want to exclude important ideas based on their source. I want to judge ideas by their substance, regardless of their source. But you started objecting to that, so here we are, and I've tried many times to get you to clarify your methodology. I'm now trying again, despite the yelling and ridiculing. I also don't know what your rules are – if I wrote something a month ago, can I link that? Yesterday, but it was originally for some other conversation? So I've been trying to find out what your methodology rules are, because I literally don't know what you consider allowable in the conversation, plus I also think I disagree with your methodology (but I'm still trying to clarify it). What if it's correct and you're mistaken? This isn't a matter of sides, but truth. I read you as saying you don't care about the truth if CR is true, but I guess you mean something else – what? What would convince you to reallocate time? If you don't have time to think much, we could just stop now... I organized my life to have time to deal with ideas. Why not? Are you very interested in ideas? Are you young and new to trying to understand things? Old and new? Don't see the value of a website or any kind of canonical statements of your views?
0Lumifer6y
Oh, but it's meta arguing! :-) In any case, the point is that you assume I hold some positions without any... support for these assumptions. ("Local" doesn't mean you can't bring in quotes from a book. It means none of your arguments are incorporated by reference but instead have to be fully included in the text of the thread.) Basically to prevent the conversation from losing shape and clarity. Most philosophical discussions tend to sink into the quicksand of subtly (or not so subtly) different definitions for the words used and degenerate into mutually-incomprehensible stand-offs or splotchy messes. Also -- a fun observation -- a lot of people adept at quoting from sources turn out to have a very shaky understanding of what these sources actually mean and what the implications are (this is a general observation, not aimed at you in particular). The rules are the rules of a conversation: you talk/type in easily digestible chunks, you can quote anything you want but don't use "pointers" (points over yonder: "that thing over there proves my point, go check it out if you doubt it"), pass your variables by value. It would help if you give hard definitions for the terms you use. We haven't figured out what "correct" means :-) My time and attention are limited. I don't feel establishing the validity of CR should be at the top of my to-do list. Changes in relative importance of things. There is a local saying coming from Eliezer that beliefs should pay rent. If the validity of CR starts to affect my life in major ways, I would reallocate time to thinking about it. And you realize, of course, that there are a great many more ideas than CR, so even if you decided to dedicate your life to "dealing with ideas", CR is still not the obvious choice. There is a variety of reasons. One is that I'm not particularly interested in converting everyone to my worldview. Another is that it changes on occasion. Yet another is that putting up a vanity website would do pretty much nothing.
0curi6y
I bet it does. What do you do and what are some of your main philosophical beliefs which you would think it's important if they're mistaken? (I'll be happy to answer the same question though not with any use of pointers to my websites banned.) I reviewed all the well known options (and some but not all obscure ones – and I don't mind reviewing more obscure ones when someone interested in conversation brings one up) and made a judgement about which is correct and non-refuted, and that all the others are refuted by arguments I know. In epistemology, that one is CR. I would expect other people to attempt something like this, but I find they normally haven't – and don't want to begin. Does this sort of project interest you? If not, what sort of truth-seeking does interest you? And if you want me to put in extra work to use fewer references than I normally would – do you have any value to offer to motivate me to do this? For example, do you think you'll continue the conversation to a conclusion? Most people don't, and I currently don't expect you to, and I'd rather not jump through a bunch of hoops for you and then you just stop responding.
0Lumifer6y
What exactly is the falsifiable claim that you're making and how would you expect it to be falsified? :-)

Oh, there are a lot. Existence of afterlife, for example. The nature of morality. Things like that.

How confident are you of your judgement?

Not particularly, because of lack of relevancy (see above about paying rent). I don't feel the need to pass a judgement on a set of options if that choice will lead to zero change.

I don't expect this conversation to have a conclusion in the sense of general agreement that A is wrong and B is correct. I view it more as a -- to use a Culture name -- A Frank Exchange Of Views which might lead to new information being exchanged, new angles of view opened, maybe even new perspectives -- but nothing as decisive as a sharp-edged black-and-white conclusion.
0curi6y
Will you briefly indicate some specifics, especially things you think CR might disagree about?

Very, because I've put a great deal of effort (as have some others) into doing this investigation, finding people who believe I'm mistaken and are willing to discuss, etc. There are no major outstanding leads left that need checking but haven't been checked. I genuinely don't know what more I could do that would make a big difference. I can do some lesser things like double-check more things that have been single-checked, or make more websites and optimize them more and get more traffic to them so that there's more potential criticism (both raw traffic quantity and also getting specific smart ppl).

Why do you think knowing what way of thinking is correct would lead to zero change? It led to tons of change for me. For you, I'd expect it to mean re-evaluating more or less your entire life and making huge changes. Areas of change-implication include parenting, relationships/marriage, how to discuss, induction, views on science and ways of judging scientific claims, approach to AGI, etc.

Do you think that sort of conclusion is a valuable thing to reach in general? About some issues? I do.
0Lumifer6y
These things are orthogonal to CR; CR standing or falling does not affect them. That's precisely the reason I'm not terribly interested in heavily engaging with CR.

From my point of view it's a bad sign.

How so? I don't see why changing views on epistemology would lead to a different approach to, say, marriage or parenting.

Valuable, but rarely available for issues of importance.
0curi6y
Epistemology is the field which says how knowledge is created. Solutions to problems are a type of knowledge. How to solve problems in a marriage is therefore determined substantially by epistemology. Education of children is primarily an issue of helping them create knowledge. How to do this depends on how knowledge is created.

You're mistaken about what is orthogonal to CR. You mentioned afterlife – what to believe about that is a matter of judging arguments (or put another way: creating knowledge of whether there is or isn't an afterlife), and for that you need epistemology, which is the field that tells you the methods of discussing and evaluating ideas. You also mentioned morality. Moral argument is governed by epistemology, and also lots of morality is basically derived from epistemology, because morality is about how to live and some of the key things about how to live are to live in a rational, error-correcting and problem-solving way.

What if it was routinely available, if you knew how? That's what my epistemology says. So there's impact-on-life there! If you can suggest a way I should change my methods for judging this, please share it. (If you have preliminary questions first, feel free to ask them!)
0Lumifer6y
Cute play with words, but bears no relationship to the real world. Ditto for parenting. Ditto for afterlife. You're offering a version of the argument that since physics deals with the lowest (most basic) levels of matter, all other sciences are (or should be) physics: chemistry, biology, sociology, etc. So solving problems in marriage is physics because you are both made out of atoms. We have a basic disagreement: you think that models are either true or not, and I think, to quote George Box, that "All models are wrong but some are useful". Rely less on whether someone can successfully argue something and more on empirical reality.
0curi6y
I'm not playing with words, I'm expressing the CR perspective. You apparently disagree, but if CR is correct then what I said is correct. So CR's correctness has consequences for your life. I am not offering reductionism. Married people literally do things like discuss disagreements and try to solve problems – exactly the kind of thing CR governs. That doesn't mean CR is the only thing you need to know – you also need to know relationship-specific stuff (which you btw need to learn – and so CR is relevant there). I think many ideas aren't models. This is a CR belief which would have impacts on your thinking if you understood it and decided it was correct. Can you be more specific? How does anything I'm doing or saying clash with reality? Arguments about reality are totally welcome, and I've both sought them out and created them myself. BTW CR philosopher David Deutsch is literally a founder of a parenting/educational movement. Here is one of my essays about CR and parenting: http://fallibleideas.com/taking-children-seriously
0Lumifer6y
So what is the domain that CR claims? I thought it was merely epistemology, but apparently it includes marital counseling and parenting advice? By the way, your style pattern-matches to religious proselytizing very well.

So far we have had the underlying reality and imperfect representations thereof which we called "models". What is an "idea"?

You said

You're looking for criticism from people, not from reality. Think about it this way: let's say you have an idea about how to make a killing in financial markets. Your understanding of how to figure out whether it works is to ask all your friends and interested strangers (IRL and on the 'net) to criticize it. If they can't convince you it's bad, you declare it good. But there is another way -- you don't ask anyone's opinion, but instead actually attempt to trade it and see if it works. I prefer the second type of testing claims to the first one.
0curi6y
CR is an epistemology. It has implications, not domain claims. Methods of thinking are used in every field!

Can you link an example? I'm skeptical but I'd like to read something similar to my writing.

I've done both. But the primary issue here is critical argument, not testing, b/c it's about philosophy, not science. My tests are anecdotal and don't really matter to the discussion. If there's a particular test you think is important for me to do, what is it?

EDIT: forgot the link about ideas: http://fallibleideas.com/ideas
0Lumifer6y
You were much more gung ho about it just a little bit earlier: and on your website you're quite explicit that your approach can solve ALL problems.

Not so much writing style, but argumentative style. Basically, your comments try to set in a number of hooks (like "This stuff is covered at length but is complicated to learn. Are you interested in doing things like reading a bunch and discussing it as you go along so you can learn it?" or "What do you do and what are some of your main philosophical beliefs which you would think it's important if they're mistaken?"); these hooks have a line, and all lines lead back to "start reading this book and let's discuss it", which is where you really want to end up. And there is the promise that this philosophy will significantly influence my entire life.

I see this as having a lot of parallels with classic proselytizing, say, Christian, where you set your hooks ("Are you unhappy? Does life make no sense to you?"), all lines lead to reading the Good News and inviting Jesus into your heart and, of course, once you accept Him into your life, that life is supposed to change dramatically.

Note another disagreement point: about the relative value of critical arguments vs empirical testing :-)

The standard one: does it work? For example, you are offering parenting advice. Does it work? How do you know? Ditto for all the other kinds of life advice that you offer and want to charge for.
0curi6y
Yes, my philosophy works great. I have a great life, lots of success, etc. This is anecdotal and open to debate about how to interpret the test results. I don't wish to switch from debating ideas to sharing tons of personal info and debating my life choices (some of which are successful by non-standard values, and so will appear unsuccessful, and the right values have to be debated to judge them, etc.).

Even if my personal life were a mess, that still wouldn't refute my philosophy. That wouldn't be an argument which refutes any particular epistemology claim.

You seem to object to the concept of critical argument, and its role as the method of dealing with many issues.

I don't see the difference. Implications are a big deal.
0Lumifer6y
I don't mean your personal life. You offer advice professionally. How do you know that you advice leads to desired outcomes? Does it? In which percentage of cases? Did you measure anything? I don't object to the concept. I object to it being sufficient to determine whether something is "true" (using your terminology) and to the idea that enough critical arguments can replace real-life testing. When people say "X has implications for this" and "This is determined substantially by X", these sentences usually have different meanings.
0curi6y
I have no interest in violating the privacy of my clients, or claiming my philosophy is good b/c of my consulting results. I'm not claiming that, so you don't need to challenge it.

Such methods could not settle the philosophical issues, anyway. I might communicate badly. My clients might be a non-random sample of people with very ambitious goals. My clients might not do what I advised. Etc., etc. Any empirical results would be logically compatible with my philosophy being true.

Please don't paraphrase me incorrectly, in quote marks, while omitting any actual quote.
0Lumifer6y
What does this have to do with the privacy of your clients? I am not asking you to tell me stories, I'm asking whether you have any metrics of the performance of the product that you're selling.

I thought you were Popperian. Is your philosophy empirically falsifiable, then?

Direct quote:
0curi6y
Thanks for the quote; I was mistaken to say your paraphrase was incorrect. They're big implications.

I don't see the point of this part of the discussion. Popperians say scientific ideas should be (empirically) falsifiable. Philosophy isn't empirically falsifiable; it's addressed by critical arguments.

I do not use consulting metrics in marketing or other public statements; they relate to private matters; I'm not going to discuss them. However, I thought of a better way to approach this: I've given lots of advice, for free, in public, with permalinks. So, unlike my private consulting, I'll talk about that. Broadly, here are the results: Some people love my advice. Super fans! A larger number of people don't want to talk with me. Haters! (I'm intentionally saying the results are pretty polarized.)

How is that to settle anything? Are we to go by popular opinion? You brought this topic up to try to get away from people. But I regard this as being about people! And btw I don't know what metrics you would consider appropriate for this.

What I wanted to look at isn't people but critical arguments, and my claim is that FI is non-refuted – meaning not just that no refutation is known to me, but also that no one else knows one who is willing to share it. I think it's wise to survey the literature, take public comments, seek out discussions at a variety of forums, etc., in addition to thinking about it personally. That's a worthwhile extra step to help find refutations.

So the thing I was talking about, as I see it, was fundamentally about ideas (particularly critical arguments), not people; and the thing you're bringing up is about what people do, how they react to advice, etc – about people rather than arguments/ideas. I was trying to talk about the current objective state of the intellectual debate; you're bringing up the issue of how people react to me and what happens in their lives.
0Lumifer6y
Hold on, hold on. Your philosophy isn't abstract ruminations about the number of angels on the head of a pin. Your philosophy has implications. BIG implications. In fact, you're saying it changes people's lives! And these are phenomena of the empirical realm. We can look at them. We can evaluate them. We can see if the "implications" actually lead to consequences that your philosophy predicts and expects.

Unless your philosophy just shrugs and says "Beats me, I have no idea what these interventions will do", it makes predictions about these implications. And the good thing about all these is that they are verifiable and falsifiable. So... how about testing these implications? If they fail, would you insist it has no bearing on the philosophy?

I agree, the public reaction to ideas doesn't tell you much. But how is this "a better way", then?

I was talking mostly about the whole of reality, not just people, and my point is that critical arguments by themselves are insufficient.

What is the word "objective" doing in there?

No, I don't. You just did. I'm talking about testing your ideas in reality, in particular, by the simplest test of whether they work.
0curi6y
As before, you don't know how CR works, we have massive philosophical differences, and your questions are based on assuming aspects of your philosophy are true. Are you interested in understanding a different perspective, or do you just want to challenge my ideas to meet the criteria your framework says matter?
0Lumifer6y
I don't think so. At the moment we are operating in a very simple, almost crude, framework: there's reality, there are models, we can detect some mismatches between the reality and the models. Isn't falsification one of the favourite Popperian ideas? I am asking you questions, am I not? And offering you -- what do you call them? ah -- critical arguments.
0curi6y
I let you take substantial control over conversation flow. You took it here – you overestimated your knowledge of Popper and were totally wrong. You do not seem to have learned from this error. You didn't answer my question about your interest, and you seem totally lost as to what we disagree about. You're still, in response to "your questions are based on assuming aspects of your philosophy are true", making the same assumptions while denying it. You don't have anything like a sense of what we disagree about, but you're trying to lead the conversation anyway. Your questions are in service of lines of argument, not finding out what I think – and the lines of argument don't make sense because you don't know what to target.
0Lumifer6y
What exactly did I say that was totally wrong? Quote, please.

These assumptions take half a sentence. There are exactly three of them: there is a reality, there are models of it, and we can detect some mismatches between the two. Which one do you think is unjustified?

Supply me with targets, then :-D
0curi6y
Quoting: I regard this as indicating you misunderstand CR. Then later:

In science, yes, testing is a favored idea, though even in science most ideas are rejected without being tested: http://curi.us/1504-the-most-important-improvement-to-popperian-philosophy-of-science

But you don't want references, and I don't want to rewrite or copy/paste my blog post, which is itself summarizing some information from books that would be better to look at directly.

----------------------------------------

I have a lot of targets on my websites, like http://fallibleideas.com and https://reasonandmorality.com, but you've said you don't want to look at them. Do you have a website with information I could skim to find disagreements? Earlier, IIRC, I tried to ask about some of your important beliefs but you didn't put forward any positions to debate.

Is there any written philosophy material you think is correct, and would be super interested to learn contains mistakes? Or do you just think the ideas in your head are correct but they aren't written down, and you'd like to learn about mistakes in those? Or do you think your own ideas have some flaws, but are pretty good, so if I pointed out a couple mistakes it might not make much difference to you?

What do you want to get out of this discussion? Coming to agree about some major philosophy issues would be a big effort. Under what sort of circumstances do you expect you would stop discussing? Do you have a discussion methodology which is written down anywhere? I do: http://curi.us/1898-paths-forward-short-summary

I have a philosophy I think is non-refuted. I don't know of any mistakes and would be happy to find out. It's also written down in public to expose it to scrutiny.
0Lumifer6y
Your philosophy is advertised as "All problems can be solved by knowing how. I tell you how." This looks to me like crossing the demarcation threshold. Would you insist that there are no possible empirical observations which can invalidate your advice?

You asked before. Still nope and nope.

When you stop being interesting.

Define "mistake".
0curi6y
You can bring up observations in a discussion of a piece of advice, but as always the role of the evidence is governed by arguments stating its role. And the primary issue here is argument.

This is a theory claim. This is a claim that I have substantial problem-solving knowledge for sale, but it is not intended to indicate I already know full solutions to all problems. It's sufficiently non-specific that I don't think it's a very good target for discussion.

Why are you interested now? http://fallibleideas.com/definitions

And are you really unfamiliar with this common English word? Do you know what being wrong is? Less wrong? Error? Flaw? Are you trying to raise some sort of philosophical issue? If so, please state it directly.

What about the rest?
0Lumifer6y
I'm interested in smart weird people :-P

Oh, boy. We are having fundamental philosophical disagreements and you think dictionary definitions of things like "wrong" are adequate? You say that philosophy is not falsifiable. OK, let's assume that for the time being. So can we apply the term "wrong" to some philosophies and "right" to others? On what basis? You will say "critical arguments". What is a critical argument? Within which framework are you going to evaluate them? You want "mistakes" pointed out to you. What kind of things will you accept as a "mistake", and what kind of things will you accept as indicating that it's valid? I disagree that definitions are not all that important.

Well, obviously I think they are correct to some degree (remember, for me "truth" is not a binary category).

See above: what is a "mistake", given that we're deliberately ignoring empirical testing?

Things I'd like to learn are more like new-to-me frameworks, angles of view, reinterpretations of known facts. To use Scott Alexander's terminology, I want to notice concept-shaped holes.
0curi6y
Criteria of mistakes are themselves open to discussion. Some typical important ways to point out mistakes are:

1) internal contradictions, logical errors
2) non sequiturs
3) a reason X wouldn't solve problem Y, even though X is being offered as a solution to Y
4) an idea assumes/uses and also contradicts some context (e.g. background knowledge)
5) pointing out a contradiction with evidence
6) pointing out ambiguity, vagueness

There are many other types of critical arguments. For example, sometimes an argument, X, claims to refute Y, but X, if correct, refutes everything (or everything in a relevant category). It's a generic argument that could equally well be used on everything, and is being selectively applied to Y. That's a criticism of X's capacity to criticize Y.

----------------------------------------

Ideas solve problems (put another way, they have purposes), with "problem" understood very broadly (including answering questions, explaining an issue, accomplishing a goal). A mistake is something which prevents an idea from solving a problem it's intended to solve (it fails to work for its purpose).

By correcting mistakes we get better ideas. We fix issues preventing our problems from being solved and our purposes achieved (including the purpose of correctly intellectually understanding philosophy, science, etc.). We should prefer non-refuted ideas (no known mistakes) to refuted ideas (known mistakes).
0Lumifer6y
Ways to point out mistakes? Then the question remains: what is a "mistake"? A finger pointing at the moon is not the moon.

Your (4) is the same thing as (1) -- or (5), take your pick. Your (5) is forbidden here -- remember, we are deliberately keeping to one side of the demarcation threshold -- no empirical evidence or empirical testing allowed. (6) is quite curious -- is being vague a "mistake"?

In the real world? Then they are falsifiable and we can bring empirical evidence to bear. You were very anxious to avoid that.

Looks like a non sequitur: generating new (and better) ideas is quite distinct from fixing the errors of old ideas -- similar to the difference between writing a new program and debugging an existing one.

I would argue that we should prefer ideas which successfully solve problems to ideas which solve them less successfully (demarcation! science! :-D)
0curi6y
I actually wrote a sentence answering that ("A mistake is something which prevents an idea from solving a problem it's intended to solve"). Do you not read ahead before replying, and not go back and edit either?

In general, yes. It technically depends on context (like the problem specification details). Normally, e.g., the context of answering a question is that you want an adequately clear answer, so an inadequately clear answer fails.

Ideas solve intellectual problems, and some of those solutions can be used to solve problems we care about in the real world by acting according to a solution. Some problems (e.g. in math) are more abstract and it's unclear what to use the solutions for. I have nothing against the real world. But even when the real world is relevant, you still have to make an argument saying how to use some evidence in the intellectual debate. The intellectual debate is always primary. You can't just directly look at the world and know the answers, though sometimes the arguments involved with getting from evidence X to rejecting idea Y are sufficiently standard that people don't write them out. You are welcome to mention some evidence in a criticism of my philosophy claims if you think you see a way to relevantly do that.

You have idea X (plus context) to solve problem P. You find a mistake, M. You come up with a new idea to solve P which doesn't have M. Whether it's a slightly adjusted version of X (X2) or a very different idea that solves the same problem is kinda immaterial. Both are acceptable. Methodologically, the standard recommendation is to look for X2 first.

I consider solving a problem to be binary – X does or doesn't solve P. And I consider criticisms to be binary – either they are decisive (saying why the idea doesn't work) or not. Problems without success/failure criteria I consider inadequately specified. Informally we may get away with that, but when trying to be precise and running into difficult issues, then we need to specify our problems better.
0Lumifer6y
That's a curious definition of a "mistake". It's very... instrumental and local. A "mistake" is a function of both an idea and a problem -- therefore, it seems, if you didn't specify a particular problem you can't talk about ideas being mistaken. And yet your examples -- e.g. an internal logical inconsistency -- don't seem to require a problem to demonstrate that an idea is broken.

Oh, I'm sure it's relieved to hear that.

Why is that? That's an interesting claim. An intellectual debate is what's happening inside your head. You are saying that it's primary compared to the objective reality outside of your head. Am I understanding you correctly?

Only if a problem has a binary outcome. Not all problems do. A black-and-white vision seems unnecessarily limiting.

Consider standard statistics. Let's say we're trying to figure out the influence of X on Y (where both are real values). First, there is no sharp boundary between a solution and a not-solution. You can build a variety of statistical models which will make different trade-offs and produce different results. There is no natural dividing line between a slightly worse model which would be a not-solution and a slightly better model which would be a solution. Moreover, since these different models are making trade-offs, you can criticise these trade-offs, but generally speaking it's difficult to say that this one is outright wrong and that one is clearly right. There's a reason they're called trade-offs. Typically at the end you pick a statistical model or an ensemble of models, but the question "is the problem solved, yes or no?" is silly: it is solved to some extent, not fully, but it's not at the "we have no idea" stage either.

Life must be very inconvenient for you.

By the way, what about optimization problems? The goal is to maximize Y by manipulating X. There is no threshold, you want Y to be as large as possible. What's the criterion for success?
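The statistics point about trade-offs can be made concrete with a small sketch. Everything here is invented for illustration: synthetic data with a true slope of 2.0, and two arbitrary regularization strengths.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)  # true influence of X on Y is 2.0

def ridge_slope(x, y, lam):
    # Closed-form ridge regression for one predictor, no intercept:
    # minimizes sum((y - b*x)^2) + lam * b^2
    return float(x @ y / (x @ x + lam))

# Two models making different bias/variance trade-offs:
light = ridge_slope(x, y, lam=0.1)   # trusts the data more (higher variance)
heavy = ridge_slope(x, y, lam=25.0)  # shrinks harder toward zero (higher bias)

print(light, heavy)
```

Both numbers are estimates of "the influence of X on Y", and varying `lam` continuously produces a continuum of models between them, with no natural line separating a "solution" from a "not-solution".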
0curi6y
This is not local – I specified that context matters (whether the context is stated as part of the problem, or specified separately, is merely a matter of terminology). You can't determine whether a particular sentence is a correct or incorrect answer without knowing the context – e.g. what is it supposed to answer? The same statement can be a correct answer to one issue and an incorrect answer to a different issue. If you don't like this, you can build the problem and the context into the statement itself, and then evaluate it in isolation.

I'm guessing the reason you consider my view on mistakes "instrumental" is that I think one has to look at the purpose of an idea instead of just the raw data. It's because I add a philosophy layer where you don't. So your alternative to "instrumental" is to say something like "mistakes are when ideas fail to correspond to empirical reality" – and to ignore non-empirical issues, interpretation issues, and that answers to questions need to correspond to the question, which could e.g. be about a hypothetical scenario. To the extent that questions, goals, human problems, etc., are part of reality then, sure, this is all about reality. But I'm guessing we can both agree that's a difference of perspective.

Self-contradictory ideas are broken for many problems. In general, we try to criticize an idea as a solution to a range of problems, not a single one. Those criticisms are more interesting. If your criticism is too narrow, it won't work on a slight variant of the idea. You normally want to criticize all the variants sharing a particular theme. Self-contradictory ideas can (as far as we know) only be correct solutions to some specific types of problems, like for use in parody or as a discussion example.

Because facts are not self-explanatory. Any set of facts is open to many interpretations. (Not equally correct interpretations or anything like that, merely logically possible interpretations. So you have to talk about your interpre
0Lumifer6y
Oh, I agree. It's just that you were very insistent about drawing the line between unfalsifiable philosophy and other, empirically-falsifiable stuff, and here you're coming back into the real-life problems realm where things are definitely testable and falsifiable. I'm all for it, but there are consequences.

Sure, but that's not an intellectual debate. If someone asks how to start a fire and I explain how you arrange kindling, get a flint and a steel, etc., there is no debate -- I'm just transferring information.

Not necessarily. If you put your hand into a fire, you will get a burn -- that's easy to learn (and small kids learn it fast). Which philosophy issues are prior to that learning?

No can do. But tell you what, the fewer silly things you say, the less often you will encounter overt sarcasm :-)

Which problems can't you solve otherwise?

There are a lot of issues with continuous (real number) decisions. Let's say you're deciding how much money to put into your retirement fund this year and the reasonable range is between $10K and $20K. You are not going to treat $14,999 and $15,000 as separate solutions, are you?

Sure they do, but not always. And your approach requires them. I still don't see the need for these rather severe limitations. You want to deal with reality as if it consists of discrete, well-delineated chunks and, well, it just doesn't. I understand that you can impose thresholds and breakpoints any time you wish, but they are artifacts, and if your method requires them, it's a drawback.

Yes, but you typically have an explore-or-exploit problem. You need to spend resources to look for a better optimum; at each point in time you have some probability of improving your maximum, but there are costs and they grow. At which point do you stop expending resources to look for a better solution?
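One way to make the stopping question concrete (all the numbers here are made up for illustration): sample candidate solutions at random, track the best found so far, and charge a fixed cost per sample, so total cost grows with exploration. Net payoff rises while exploration keeps finding improvements and falls once the growing cost outpaces them.

```python
import random

random.seed(1)

draws = [random.random() for _ in range(200)]  # candidate solution values in [0, 1)
cost_per_draw = 0.002                          # total cost grows linearly with exploration

best_so_far = 0.0
net_payoff = []
for t, value in enumerate(draws, start=1):
    best_so_far = max(best_so_far, value)
    net_payoff.append(best_so_far - cost_per_draw * t)

# In hindsight, the best point to have stopped is where net payoff peaked:
stop_at = max(range(len(net_payoff)), key=net_payoff.__getitem__) + 1
print(stop_at, round(net_payoff[stop_at - 1], 3))
```

In practice you don't know the peak in advance, which is exactly the question being raised here; standard stopping rules ("stop when the expected marginal improvement drops below the marginal cost") give a threshold criterion rather than a binary solved/unsolved verdict.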
0curi6y
If you have an empirical argument to make, that's fine, but I don't think I'm required to provide evidence for my philosophical claims. (BTW, I criticize the standard burden-of-proof idea in Yes or No Philosophy. In short, if you can't criticize an idea then it's non-refuted, and demanding some sort of burden of proof is not a criticism, since lack of proof doesn't prevent an idea from solving a problem.)

The problem of induction. Problems about how to evaluate arguments: how do you score the strength of an argument? And what difference does it really make if one scores higher than another? Either something points out why a solution doesn't work or it doesn't – unless you specifically try to specify non-binary problems, but that doesn't really work. You can specify that a set of solutions are all equal; OK, then either pick any one of them if you're satisfied, or else solve some other, more precise problem that differentiates. You can also specify that higher-scoring solutions on some metric are better, but then you just pick the highest-scoring one, so you get a single solution or maybe a tie again. And whether you've chosen a correct solution given the problem specification, or not, is binary. And various problems about how you decide what metrics to use (the solution to that being binary arguments about what metrics to use – or in many cases don't use a metric; metrics are overrated but useful sometimes).

Yes, so then you guess what to do and criticize your guesses. Or, if you wish, define a metric with positive points for a higher score and negative points for resources spent (after you guess-and-criticize to figure out how to put the positive score and all the different types of resources into the same units) and then guess how to maximize that (e.g. define a metric about resources allocated to getting a higher score on the first metric, spend that much resources, and then use the highest-scoring solution). Multi-factor metrics don't work as well as people think, but ar
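The combined-metric idea can be sketched like this (the candidate names, scores, resource figures, and conversion rate are all invented for illustration):

```python
# Hypothetical candidate solutions: (name, raw_score, resources_spent)
candidates = [
    ("A", 8.0, 3.0),
    ("B", 9.5, 7.0),
    ("C", 7.0, 2.0),
]

# Guessed conversion rate putting resources into score units;
# the guess itself is open to criticism.
RESOURCE_WEIGHT = 0.5

def combined(candidate):
    _, score, resources = candidate
    return score - RESOURCE_WEIGHT * resources

# Once the metric is fixed, picking a solution is an argmax:
best = max(candidates, key=combined)
print(best[0])  # -> A  (8.0 - 0.5*3.0 = 6.5 beats B's 6.0 and C's 6.0)
```

This illustrates the binariness claim: the scores vary continuously, but "did you choose a solution that maximizes the stated metric, given the problem specification?" comes out yes or no.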
0Lumifer6y
It depends on what you want to do with them. If all you want to do is keep them on a shelf and once in a while take them out, dust them, and admire them, then no, you don't. On the other hand, if you want to persuade someone to change their mind, evidence might be useful. And if you want other people to take action based on your claims', ahem, implications, evidence might even be necessary.

It seems that the root of these problems is your insistence that truth is a binary category. If you are forced to operate with single-bit values and have to convert every continuous function into a step one, well, sure you will have problems.

The thread seems to be losing shape, so let's do a bit of a summary. As far as I can see, the core differences between us are:

* You think truth (and arguments) are binary; I think both have continuous values.
* You think intellectual debates are primary and empirical testing is secondary; I think the reverse.

Looks reasonable to you?
0curi6y
the two things you listed are ok with me. i'd add induction vs guesses-and-criticism/evolution to the list of disagreements. do you think there's a clear, decisive mistake in something i'm saying? can you specify how you think induction works? as a fully defined, step-by-step process i can do today? though what i'd prefer most is replies to the things i said in my previous message.
0Lumifer6y
I would probably classify it as suboptimal. It's not a "clear, decisive mistake" to see only black and white -- but it limits you.

In the usual way: additional data points increase the probability of the hypothesis being correct; however, their influence tends to decline rapidly to zero, and they can't lift the probability over the asymptote (which is usually less than 1). Induction doesn't prove anything, but then in my system nothing proves anything.

What you said in the previous message is messy and doesn't seem to be terribly impactful. Talking about how you can define a loss function or how you can convert scores to a yes/no metric is secondary and tertiary to the core disagreements we have.
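A minimal sketch of what this "declining influence with an asymptote" looks like, with invented priors and likelihoods: two hypotheses (H1, H2) predict the incoming data equally well, a third (H3) predicts it worse. Each confirming observation moves H1 up by less than the one before, and H1 can never pass the asymptote prior(H1) / (prior(H1) + prior(H2)) = 0.625, which is below 1.

```python
# Sequential Bayesian updating over three rival hypotheses. The priors and
# per-observation likelihoods are invented for illustration only.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
likelihood = {"H1": 0.9, "H2": 0.9, "H3": 0.5}  # P(observation | hypothesis)

post = dict(priors)
history = []
for _ in range(50):  # feed in 50 identical confirming observations
    unnorm = {h: post[h] * likelihood[h] for h in post}
    z = sum(unnorm.values())
    post = {h: u / z for h, u in unnorm.items()}
    history.append(post["H1"])
```

The first update moves H1 from 0.5 to about 0.55; by the fiftieth, the increments are negligible and H1 sits just under 0.625 no matter how much more of the same data arrives.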
0curi6y
the probability of which hypotheses being correct, and by how much? how do you differentiate between hypotheses which do not contradict any of the data?
0Lumifer6y
For a given problem I would have a set of hypotheses under consideration. A new data point might kill some of them (in the Popperian fashion) or might spawn new ones. Those which survive -- all of them -- gain some probability. How much, it depends. No simple universal rule. For which purpose and in which context? I might not need to differentiate them. Occam's razor is a common heuristic, though, of course, it is NOT a guide to whether a particular theory is correct or not.
0curi6y
Do all the non-contradicted-by-evidence ideas gain equal probability (so they are always tied and i don't see the point of the "probabilities"), or differential probability?

EDIT: I'm guessing your answer is you start them with different amounts of probability. after that they gain different amounts accordingly (e.g. the one at 90% gains less from the same evidence than the one at 10%). but the ordering (by amount of probability) always stays the same as how it started, apart from when something is dropped to 0% by contradicting evidence. is that it? or do you have a way (which is part of induction, not critical argument?) to say "evidence X neither contradicts ideas Y nor Z, but fits Y better than Z"?
0Lumifer6y
Different hypotheses (= models) can gain different amounts of probability. They can start with different amounts of probability, too, of course. Of course. That's basically how all statistics work. Say, if I have two hypotheses that the true value of X is either 5 or 10, but I can only get noisy estimates, a measurement of 8.7 will add more probability to the "10" hypothesis than to the "5" hypothesis.
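The update Lumifer describes can be sketched in a few lines. Assuming Gaussian measurement noise with σ = 2 (a value picked purely for illustration; the comment doesn't specify one), a reading of 8.7 is more likely under "X = 10" than under "X = 5", so more posterior probability flows to the "10" hypothesis:

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density; serves as the likelihood of a noisy measurement."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

priors = {5.0: 0.5, 10.0: 0.5}   # two hypotheses about the true value of X
sigma = 2.0                      # assumed noise level (invented for illustration)
measurement = 8.7

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {h: p * normal_pdf(measurement, h, sigma) for h, p in priors.items()}
z = sum(unnorm.values())
posterior = {h: u / z for h, u in unnorm.items()}
```

With these numbers the "10" hypothesis ends up around 0.82 and the "5" hypothesis around 0.18; with a noisier instrument (larger σ) the same measurement would shift less probability.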
0curi6y
what do you do about ideas which make identical predictions?
0gjm6y
They get identical probabilities -- if their prior probabilities were equal.

If (as is the general practice around these parts) you give a markedly bigger prior probability to simpler hypotheses, then you will strongly prefer the simpler idea. (Here "simpler" means something like "when turned into a completely explicit computer program, has shorter source code". Of course your choice of language matters a bit, but unless you make wilfully perverse choices this will seldom be what decides which idea is simpler.)

In so far as the world turns out to be made of simply-behaving things with complex emergent behaviours, a preference for simplicity will favour ideas expressed in terms of those simply-behaving things (or perhaps other things essentially equivalent to them) and therefore more-explanatory ideas. (It is at least partly the fact that the world seems so far to be made of simply-behaving things with complex emergent behaviours that makes explanations so valuable.)
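A toy version of such a simplicity prior, with everything invented for illustration: each hypothesis is tagged with a short string standing in for its shortest explicit program, and prior mass is 2^(-length), then normalized. (Real treatments use program length in bits under a fixed universal language, not character counts, but the shape of the preference is the same.)

```python
# Toy simplicity prior over two predictively identical hypotheses.
# The "program" strings are made-up stand-ins for explicit source code.
programs = {
    "trees exist": "world(trees)",
    "no trees, but everything conspires to look like trees": "world(no_trees)+conspiracy(tree_appearances)",
}
weights = {h: 2.0 ** (-len(p)) for h, p in programs.items()}
z = sum(weights.values())
prior = {h: w / z for h, w in weights.items()}
```

Because the prior penalty is exponential in description length, even a modestly longer program gets vanishingly little mass: the conspiracy hypothesis here starts billions of times less probable, and no amount of (identical) predictions can ever close that gap.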
0Lumifer6y
I don't need to distinguish between them, then.
0curi6y
so you don't deal with explanations, period?
0Lumifer6y
I do, but more or less only to the extent that they will make potential different predictions. If two models are in principle incapable of making different predictions, I don't see why should I care.
0curi6y
so e.g. you don't care if trees exist or not? you think people should stop thinking in terms of trees and stick to empirical predictions only, dropping any kind of non-empricial modeling like the concept of a tree?
0Lumifer6y
I don't understand what this means. The concept of a tree seems pretty empirical to me.
0curi6y
there are infinitely many theories which say trees don't exist but make identical predictions to the standard view involving trees existing. trees are not an observation, they are a conceptual interpretation. observations are things like the frequencies of photons at times and locations.
0Lumifer6y
Isn't it convenient that I don't have to care about these infinitely many theories? Since there is an infinity of them, I bet you can't marshal critical arguments against ALL of them :-P I think you're getting confused between actual trees and the abstract concept of a tree. I don't think so. Human brains do not process sensory input in terms of " frequencies of photons at times and locations".
0curi6y
why not? you can criticize categories, e.g. all ideas with feature X. i don't think so. you can't observe entities. you have to interpret what entities there are (or not – as you advocated by saying only prediction matters)
0Lumifer6y
Why not what? How can you know that every single theory in that infinity has feature X? or belongs to the same category? My nervous system makes perfectly good entities out of my sensory stream. Moreover, a rat's nervous system also makes perfectly good entities out of its sensory stream regardless of the fact that the rat has never heard of epistemology and is not very philosophically literate. Or not? Prediction matters, but entities are an awfully convenient way to make predictions.
0ChristianKl6y
I don't think you are supposed to use it for the important models.
1Lumifer6y
The ones too important to be falsified? :-D
0Elo6y
You read the same book as me! "Theory and Reality" - Peter Godfrey-Smith. I am surprised you say this. What you describe is the hypothetico-deductive method (https://en.wikipedia.org/wiki/Scientific_Method — pictured there is the hypothetico-deductive method; Wikipedia is wrong and disagrees with its own sources). The hypothetico-deductive method involves guesses, but the scientific method according to that book is about:

1. observation
2. measurement (and building models that can be predictive of that measurement)
3. standing on the shoulders of the existing body of knowledge.
4. ???
5. Profit!

Edit: that wiki page has changed a lot over the last few months and now I am less sure about what it says.
0curi6y
I don't understand what reading a book has to do with it, or what you wish me to take from the wikipedia link. In my comment I stated the CR position on scientific method, which is my position. Do you have a criticism of it?
0curi6y
i think humans don't use their full computational capacity. why expect an AGI to? in what way do you think AGI will have a better algorithm than humans? what sort of differences do you have in mind?
0siIver6y
It doesn't really matter whether the AI uses its full computational capacity. If the AI has a 100,000 times larger capacity (which is again a conservative lower bound) and it only uses 1% of it, it will still be 1,000 times as smart as the human's full capacity. AGI's algorithm will be better, because it has instant access to more facts than any human has time to memorize, and it will not have all of the biases that humans have. The entire point of the sequences is to list dozens of ways that the human brain reliably fails.
0curi6y
If the advantage is speed, then in one year an AI that thinks 10,000x faster could be as productive as a person who lives for 10,000 years. Something like that. Or as productive as one year each from 10,000 people.

But a person could live to 10,000 and not be very productive, ever. That's easy, right? Because they get stuck, unhappy, bored, superstitious ... all kinds of things can go wrong with their thinking. If AGI only has a speed advantage, that won't make it immune to dishonesty, wishful thinking, etc. Right?

Humans have fast access to facts via google, databases, and other tools, so memorizing isn't crucial. I thought they talked about things like biases. Couldn't an AGI be biased, too?
0Lumifer6y
For fun ways in which NN classifiers reliably fail, google up adversarial inputs :-) Example
0Elo6y
Rubbish in, rubbish out - right?
0Lumifer6y
No, not quite. It's more like "let us poke around this NN and we'll be able to craft inputs which look like one thing to a human and a completely different thing to the NN, and the NN is very sure of it".
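Not a neural network, but the same mechanism in miniature: gradient-based attacks like FGSM exploit the fact that many tiny coordinate-wise changes, each imperceptible on its own, add up linearly in the model's score. A stdlib-only sketch with invented weights and step size:

```python
import math

# Toy linear "classifier": score = w . x, predict class A when score > 0.
# In 100 dimensions, nudging every coordinate by eps against the weight's
# sign shifts the score by eps * sum(|w_i|) = eps * dim.
dim = 100
w = [1.0 if i % 2 == 0 else -1.0 for i in range(dim)]
x = [0.02 * wi for wi in w]                       # confidently class A: score = 2.0
score = sum(wi * xi for wi, xi in zip(w, x))

eps = 0.05                                        # tiny change per coordinate
x_adv = [xi - eps * math.copysign(1.0, wi) for wi, xi in zip(w, x)]
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))  # dropped by eps * dim = 5.0
```

No input coordinate moves by more than 0.05, yet the classifier flips from a confident "A" (score 2.0) to a confident "not A" (score -3.0). Attacks on real networks do the analogous thing using the gradient of the loss, which is why the perturbed images look unchanged to a human while the network is "very sure" of the wrong label.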