All of gedymin's Comments + Replies

gedymin00

you end up observing the outputs of models, as suggested in the original post.

I agree with pragmatist (the OP) that this is a problem for the correspondence theory of truth.

What is it that makes their accumulated knowledge worthy of being relied upon?

Usefulness? Just don't say "experimental evidence". Don't oversimplify epistemic justification. There are many aspects - how well knowledge fits with existing models, with observations, what its predictive power is, what its instrumental value is (does it help to achieve one's goals), etc. F... (read more)

gedymin00

How do you evaluate whether any given model is useful or not?

One way is to simulate a perfect computational agent, assume perfect information, and see what kind of models it would construct.

If you reject the notion of an external reality that is accessible to us in at least some way, then you cannot really measure the performance of your models against any kind of a common standard.

Solomonoff induction provides a universal standard for "perfect" inductive inference, that is, learning from observations. It is not entirely parameter-free, so... (read more)
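For reference, a standard formulation of that standard: Solomonoff's universal prior weights every program $p$ for a universal monotone machine $U$ by its length,

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|},$$

where $U(p) = x*$ means the output of $U$ on $p$ begins with $x$. The choice of $U$ (equivalently, of the programming language) is the parameter alluded to above.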

0Bugmaster
Right, but I meant, in practice. Observations of what? Since you do not have access to infinite computation or perfect observations in practice, you end up observing the outputs of models, as suggested in the original post. What is it that makes their accumulated knowledge worthy of being relied upon?
gedymin-10

I've got a feeling that the implicit LessWrong'ish rationalist theory of truth is, in fact, some kind of epistemic (Bayesian) pragmatism, i.e. "truth is that which is knowable using probability theory". One may also throw in "...for a perfect computational agent".

My speculation is that LW's declared sympathy towards the correspondence theory of truth stems from political / social reasons. We don't want to be confused with the uncritically thinking masses - the apologists of homoeopathy or astrology justifying their views by "yeah, I don... (read more)

1Bugmaster
I think this statement underscores the problem with rejecting the correspondence theory of truth. Yes, one can say "homeopathy works", but what does that mean? How do you evaluate whether any given model is useful or not? If you reject the notion of an external reality that is accessible to us in at least some way, then you cannot really measure the performance of your models against any kind of a common standard. All you've got left are your internal thoughts and feelings, and, as it turns out, certain goals (such as "eradicate polio" or "talk to people very far away") cannot be achieved based on your feelings alone.
gedymin10

What do you mean by "direct access to the world"?

Are you familiar with Kant? http://en.wikipedia.org/wiki/Noumenon

gedymin00

This description fits philosophy much better than science.

gedymin00

Sounds like a form of abduction, or, more precisely, failure to consider alternative hypotheses.

gedymin80

As for your options, have you considered the possibility that 99% of people have never formulated a coherent philosophical view on the theory of truth?

2Sabiola
Better make that 99.99%, including myself.
gedymin00

I'd love to hear a more qualified academic philosopher discuss this, but I'll try. It's not that the other theories are intuitively appealing, it's that the correspondence theory of truth has a number of problems, such as the problem of induction.

Let's say that one day we create a complete simulation of a universe where the physics almost completely matches ours, except for some minor details, such as that some specific types of elementary particles, e.g. neutrinos, are never allowed to appear. Suppose that there are scientists in the simulation, and they work out... (read more)

gedymin50

I meant that as a comment to this:

less useful than what you'd get by just asking a few questions.

It's easy to lie when answering questions about your personality on e.g. a dating site. It's harder, more expensive, and sometimes impossible to lie via signaling, such as via appearance. So, even though information obtained by asking questions is likely to be much richer than information obtained from appearances, it is also less likely to be truthful.

0Vulture
Oh, I see, haha. Yes, that makes more sense, and your point is well-taken.
gedymin00

...assuming the replies are truthful.

0Vulture
Why would anyone bother to send in false data about their finger-length ratios?
gedymin40

I think universalism is an obvious Schelling point. Not only moral philosophers find it appealing; ordinary people do too (at least when thinking about it in an abstract sense). Consider Rawls' "veil of ignorance".

gedymin20

Mountaineering or similar extreme activities are one option.

gedymin40

Are there any moral implications of accepting the Many Worlds interpretation, and if so what could they be?

For example, if the divergent copies of people (including myself) in other branches of the Multiverse should be given non-insignificant moral status, then it's one more argument against the Epicurean principle that "as long as we exist, death is not here". My many-worlds self can die partially - that is, just in some of the worlds. So I should try to reduce the number of worlds in which I'm dead. On the other hand, does it really change anything compared to "I should reduce the probability that I'm dead in this world"?

-6TheAncientGeek
4Manfred
Nope, not really. With no math, the thing is, the different "branches" take up a fraction of the world. Classically you might say "If eating cake is 2 units of utility and not eating cake is 0 units, then a 50% chance of cake is 1 unit." Quantum mechanically, you'd say "If eating cake is 2 units of utility and not eating cake is 0 units, then 50% of my current measure going to eating cake is 1 unit." See Egan's Law.
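Spelled out, the arithmetic is the same in both readings:

$$U = 0.5 \times 2 + 0.5 \times 0 = 1,$$

whether the 0.5 is a classical probability of cake or the fraction of current measure going to the eating-cake branch.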
1Sabiola
I really, really hate the 'many worlds' idea, and I hope it's not true. All those almost-me's in all those worlds – what are they doing? Some of them may be great people, but lots of them are bound to be much worse than me. Every time my flipping imagination comes up with something horrible I could do, does some other Berna in some other world really do it? No, no, no, no – I really, really don't want to even think about that.
gedymin70

Is there some reason to think that physiognomy really works? Reverse causation probably explains most of it, e.g. tall people are more likely to be seen as leaders by others, so they are more likely to become leaders. Nevertheless, is there something beyond that?

5Strilanc
The 2014 LW survey results mentioned something about being consistent with a finger-length/feminism connection. Maybe that counts? Some diseases impact both reasoning and appearance. Gender impacts both appearance and behavior. You clearly get some information from appearance, but it's going to be noisy and less useful than what you'd get by just asking a few questions.
Vaniver110

Is there some reason to think that physiognomy really works?

It is the case that appearances encode lots of information, because lots of things are correlated. For example, height correlates with intelligence, probably because of generic health factors (like nutrition). Nearsightedness and intelligence are correlated, but whether this is due to different use of the eyes in childhood or engineering constraints with regards to the brain and the skull is not yet clear. The aspect ratio of the face correlates with uterine testosterone levels, which correlate... (read more)

gedymin20

Funny, I thought escaping into their own private world was not something exclusive to nerds. In fact most people do that. Schoolgirls escape into fantasies about romance. Boys into fantasies about porn. Gamers into virtual reality. Athletes into fantasies about becoming famous in sport. Mathletes - about being famous and successful scientists. Goths - musicians or artists. And so on.

True, not everyone likes to escape into sci-fi or fantasy, but that's because different minds are attracted to different kinds of things. D&D is a relatively harmless fantasy. I'm not... (read more)

0[anonymous]
Excellent point. The defining characteristic here is escaping into heroic fantasy. LOTR, Star Wars, Dragonlance Chronicles (in my youth), superhero comics. What does that suggest? A person fantasizing about superpowers does feel disempowered, don't you think? Yes, the overlap is an issue; nerds don't fully self-identify as a group, the Linux guy will not high-five the anime guy saying "we are bros". It is not really one clearly defined group. And I am thinking of the second guy. However, nerds who suffer are a clearer case. See: http://www.reddit.com/r/justneckbeardthings/ Focus on the suffering subgroup and you see it more clearly.
gedymin20

the solution will involve fixing things that made one a "tempting" bullying target

So a nerd, according to the OP, is someone who:

  • lacks empathy and interest in other people
  • lacks self confidence
  • has unconventional interests, ideas, and appearance

But even if we take for granted that this is a correct description of a nerd, these are very different issues and require very different solutions.

The last problem is simple to fix at the level of society and ought to be fixed there. Hatred of specific social groups should not be acceptable, not ... (read more)

2[anonymous]
I need to write more clearly. That is not my main thesis. My main thesis is my OMFG-level striking, shocking revelation that surprised me out of my mind, namely that e.g. obsessing over D&D is not merely a hobby or interest, but a desire to escape from a life and self you hate. This filled me with compassion and made me remember my former self, who was not far from that. That hobbies and interests, in this case nerdy ones, predict problems. You can diagnose certain issues by looking at people's hobbies and interests. This is my main thesis. The rest is digging deeper, trying to figure out the reasons, and less important.

I think you misunderstood the group-hate thing. The kids we are talking about were not yet groups at the ages of 8 or 10 when this happened to them, and actually I think it is a dangerous bias today to see every social dynamic as a group relation, ignoring individual relations. It seems that after it was discovered that racism is a thing and a bad one, now everybody who was individually oppressed wants to invent their own "race". So for example gays went from just being individuals who like gay sex and get hated by other individuals for it, to inventing their own group and identity, essentially inventing a "race" and thus re-casting the hatred they get from individual hatred to group hatred. I am quite puzzled by this. Is there a rational reason for it? Are humans hardwired to hate groups more than individuals?

At any rate, I think your point is that adult nerds are another "race", which is a problem in itself, but my real problem is that these kids were not yet nerds. Seeing this as a group-level oppression dynamic is very wrong at this 8-10-12 year old age. It was individuals who were perceived as weak, and thus got oppressed for it. There was no identity of a weak-boy group; it was not invented as a "race", although later on they became adult nerds, and then yes, to some extent they invented themselves as a "race". So it is not that nerds were hated as kids. Weak kids
gedymin50

If UGC is true, then one should doubt recursive self-improvement will happen in general

This is interesting, can you expand on it? I feel there clearly are some arguments in complexity theory against AI as an existential risk, and that these arguments deserve more attention.

To sidetrack a bit: as I've argued in a comment, if it turns out that many important problems are practically unsolvable on realistic timescales, any superintelligence would be unlikely to gain a strategic advantage. The support for this idea is much more concrete than the specul... (read more)

3JoshuaZ
Replying separately (rather than editing), to make sure this comment gets seen.

It is worth noting that although UGC may not be true, weaker versions of UGC get many similar results. So for example, one can get most results of UGC if one believes that NP <= P^U/poly, where U is a unique games oracle. Then one would get similar results under the assumption that the polynomial hierarchy does not collapse. Also, if instead of believing that the unique games problem is NP-complete one believes that it has no polynomial time algorithm, then one gets some of the same results or very similar results.

Since many people don't assign that high a probability to UGC, it may be worth asking what evidence should cause us to update to assign a higher probability to UGC (short of a proof). There seem to be four obvious lines of evidence:

First, showing that some problems expected to be NP-intermediate (e.g. factoring, graph isomorphism, ring isomorphism) can be reduced to unique games. (Remark: the obvious candidate here would be graph isomorphism. I'm not aware of such a reduction, but my knowledge of the literature here is not very good.) This would be strong evidence for at least some of the weaker versions of UGC, and thus evidence for UGC.

Second, showing that the attempted methods to solve unique games fail for intrinsic reasons that cannot be overcome by those methods. Right now, the two noteworthy methods are Sums of Squares and QAOA. If there are fundamental barriers to variants of either method working, that would make UGC substantially more plausible.

Third, proving more of the inapproximability results implied by UGC by methods independent of UGC. There's been some progress on this, but it may turn out that the inapproximability bounds implied by UGC are the correct bounds even as UGC is false. At the same time, even as these sorts of results are proved, they make UGC less directly useful for trying to actually estimate whether recursive self-improvement can substantially occur.

Fourth, it may be
8JoshuaZ
Sure. There's a basic argument that if P != NP then we should expect recursive self-improvement to be hard, because many of the problems that would be involved in self-improvement (e.g. memory management, circuit design, protein folding) are NP-hard or NP-complete. This argument in this form suffers from two problems:

First, it isn't clear what the constants in question are. There could be an exponential time algorithm for solving your favorite NP-complete problem, but the algorithm is so efficient on instances of any size that matters that it doesn't end up impacting things. It isn't completely clear how serious this problem is. On the one hand, as of right now, it looks like "polynomial time algorithm if and only if it has a practical algorithm" is a decent heuristic, even though it has exceptions. On the other hand, we've seen even in the last few years important examples where careful improvements to algorithms can drastically improve practical running speed. One major example is linear programming.

Second, most of the NP-complete problems we want to solve in practice are optimization problems. So one might hope that even if a problem is NP-hard, getting close to an optimal solution will still be doable efficiently. Maybe for most of these problems, we can expect that as the problem instances get large we actually get really efficient approximations for things which are very close to the best possible solutions.

UGC addresses both these points - the first point in a weak way, and the second point in a strong way. Prior to the Unique Games Conjecture, there was a major theorem called the PCP theorem. This is one of the few unambiguously deep results in computational complexity, and it has important consequences that say that in many cases approximation is tough. If UGC is true, then the idea that approximation is tough becomes even more true. Consider for example the problem of minimum vertex cover (which shows up in a bunch of practical contexts and is NP-complete). Now, t
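To make the vertex cover example concrete, here is a minimal Python sketch (function name hypothetical) of the classic maximal-matching algorithm, which achieves the factor-2 approximation that, if UGC is true, no polynomial-time algorithm can improve on by any constant:

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for minimum vertex cover.

    Greedily pick edges with both endpoints uncovered and add both
    endpoints to the cover. The picked edges form a matching, and any
    vertex cover must contain at least one endpoint of each matching
    edge, so the result is at most twice the optimum.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A path 1-2-3-4: the optimum is {2, 3}; the approximation returns
# {1, 2, 3, 4} - exactly a factor of 2 here.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
```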
gedymin00

Why do you think that the fundamental attribution error is a good starting point for someone's introduction to rational thinking? There seems to be a clear case of the Valley of bad rationality here. The fundamental attribution error is a powerful psychological tool. It allows us to take personal responsibility for our successes while blaming the environment for our failures. Now assume that this tool is taken away from a person, leaving all his/her other beliefs intact. How exactly would this improve his/her life?

I also don't get why thinking that "the rude ... (read more)

2Gleb_Tsipursky
Apologies for being unclear: this post is one in a series of posts, not the first, so it's not an introduction to rational thinking. Here is the blog post that we already published that introduces people to the idea of agency as a key overarching framework, and here is another blog post that does the same with System 1 and 2. These are the introductory blog posts, and now we are doing some further elaboration on rational thinking.

Regarding the specific case of the FAE, I presented on this bias to my students (I'm a college professor at Ohio State), for example in this video, and had nice feedback. One wrote in an anonymous form that "with relation to the fundamental attribution error, it can give me a chance to keep a more open mind, which will help me to relate to others more, and view a different view of the 'map' in my head." My experiences presenting to students inform this blog post. However, I will keep in mind what you said about the valley of bad rationality, that's a good point - I'll run the article by some beginner rationalists and see what they think about the issue.

Can you clarify your point about negative examples? I'm not quite clear on what you mean. Thanks a lot for the constructive criticism, really helpful!
gedymin10

I don't think that overfitting is a good metaphor for your problem. Overfitting involves building a model that is more complicated than an optimal model would be. What exactly is the model here, and why do you think that learning just a subset of the course's material leads to building a more complicated model?

Instead, your example looks like a case of sampling bias. Think of the material of the whole course as the whole distribution, and of the exam topics as a subset of that distribution. "Training" your brain with samples just from that subset is ... (read more)
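A minimal numpy sketch of that distinction, on synthetic data: the model below is deliberately simple, so nothing overfits, yet it generalizes badly because its "training" samples cover only part of the distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# The whole course: x on [0, 10]. Past exam topics: only x < 3.
x = rng.uniform(0, 10, 1000)
y = np.sin(x) + rng.normal(0, 0.1, size=x.size)

exam_topics = x < 3
line = np.polyfit(x[exam_topics], y[exam_topics], deg=1)  # a simple model

mse = np.mean((np.polyval(line, x) - y) ** 2)
print(f"MSE on the whole course: {mse:.2f}")  # poor fit outside [0, 3)
```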

1TrE
That is exactly what most students do. Source: Am student, have watched others learn.
gedymin40

There is a semi-official EA position on immigration

Could you describe what this position is? (or give a link) Thanks!

2Dias
Full open borders, although Michelle partly disagreed here, and many have concerns about immigration's effects on domestic policy/crime etc.
gedymin10

We usually don't worry about personality changes because they're typically quite limited. Completely replacing brain biochemistry would be a change on a completely different scale.

And people occasionally do worry about these changes even now, especially if they're permanent, and if/when they occur in others. Some divorces happen because a person's partner "does not see the same man/woman she/he fell in love with".

gedymin40

Taxing the upper middle class is a generally good idea; they are the ones most capable and willing to pay taxes. Many European countries apply progressive tax rates. Calling it a millionaire tax is a misnomer, of course, but otherwise I would support that (I'm from Latvia FYI).

Michael O. Church is certainly an interesting writer, but you should take into account that he basically is a programmer with no academic qualifications. Most of his posts appear to be wild generalizations of experiences personally familiar to him. (Not exclusively his own experiences, of course.) I suspect that he suffers heavily from the narrative fallacy.

1Viliam_Bur
The reason why I like those articles is that they are compatible with my (very limited) experience with what I consider upper-class people. Of course I may be wrong, or me and the author may share the same bias, etc... I am just saying that there is more here for me than merely interesting writing.

Taxing the middle class allows the government to get a lot of money easily: those are people who already have enough money that can be taken, but not enough money to defend themselves. From that angle, it is a good idea. On the other hand, the strategy of "taking money where taking money is easy" contributes to increasing the gap between the rich and the poor, and to the elimination of the most productive part of the population, which some people would consider a bad idea. Unfortunately, the consequences of getting more tax money come quickly, and the consequences of destroying the middle class take more time.
gedymin20

A good writeup. But you downplay the role of individual attention. No textbook is going to have all the answers to questions someone might formulate after reading the material. Textbooks also won't provide help to students who get stuck doing exercises. With books, it's either nothing or everything (the complete solution).

The current system does not do a lot of personalized teaching because the average university has a tightly limited amount of resources per student. The very rich universities (such as Oxford) can afford to personalize teaching to a much larger extent, via tutors.

2richard_reitz
Yeah. I've taught myself several courses just from textbooks, with much more success than in traditional setups that come with individual attention. I am probably unusual in this regard and should probably typical-mind-fallacy less.

However, I will nitpick a bit. While most textbooks won't quite have every answer to every question a student could formulate whilst reading it (although the good ones come very close), answers to these questions are typically 30 seconds away, either on Wikipedia or Google. Point about the importance of having people to talk to still stands. Also, some textbooks (e.g. the AoPS books) have hints for when a student gets stuck on a problem. Point about the importance of having people to help students when they get stuck still stands, although I believe the people best-suited to do this are their classmates; by happy coincidence, these people don't cost educational organizations anything.

I'm tinkering with a system in which a professor, instead of lecturing, has it as their job to give each of 20 graduate students an hour a week of one-on-one attention (you know, the useful type of individual attention), which the graduate student is expected to prepare for extensively. Similarly, each graduate student is tasked with giving undergraduates 1 hour/week of individual attention. This maintains a student:professor ratio of 200:1 (so MIT needs a grand total of... 57 professors), doesn't overly burden the mentors, and gives the students much more quality individual attention than I sense they're currently getting.

(Also, I believe that 1 hour of a grad student's time is going to be more helpful to a student than 1 hour of a professor's time. Graduate students haven't become so well-trained in their field that they're no longer able to simulate a non-understanding undergrad in their head (an inability Dr. Mazur claims is shared among lecturers), and I expect there's benefit from shrinking the age/culture gap. Also, no need to worry about appearing to
gedymin50

What are some good examples of rationality as "systematized winning"? E.g. a personal example of someone who has practiced rationality systematically for a long time, where there are good reasons to think doing so has substantially improved their life.

It's easy to name a lot of famous examples where irrationality has caused harm. I'm looking for the opposite. Ideally, some stories that could interest intelligent but practically minded people who have no previous exposure to the LW memeplex.

9Shmi
Scott Adams claims to be rational in that sense in his book.
Vaniver110

The easiest examples are typically business examples, but there's always the risk that the thing people attribute their success to is not the actual cause of their success. ("I owe it all to believing in myself" vs. "I owe it all to sleeping with the casting director.")

I think the cleanest example is Buffett and Munger, whose stated approach to investing is "we're not going to be ashamed of only picking obviously good investments." They predated LW by a long while, but they're aware of the Heuristics and Biases literature (consider this talk Munger gave on it in 1995).

gedymin60

The answer to the specific question about technetium is "it's complicated, and we may not know yet", according to Physics Stack Exchange.

For the general question "why are some elements/isotopes more or less stable": generally, an isotope is more stable if it has a balanced number of protons and neutrons.
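One standard way to make "balanced" quantitative (textbook nuclear physics, not from the original comment) is the asymmetry term of the semi-empirical mass formula, which reduces the binding energy of a nucleus with $Z$ protons among $A$ nucleons by

$$E_{\text{asym}} = a_A \frac{(A - 2Z)^2}{A}, \qquad a_A \approx 23\ \text{MeV},$$

a penalty that vanishes when protons and neutrons are equal in number and grows quadratically with the imbalance.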

gedymin00

I know what SI is. I'm not even pushing the point that SI is not always the best thing to do - I'm not sure if it is, as it's certainly not free of assumptions (such as the choice of the programming language / Turing machine), but let's not go into that discussion.

The point I'm making is different. Imagine a world / universe where nobody has any idea what SI is. Would you be prepared to speak to them - all their scientists, empiricists and thinkers - and say that "all your knowledge is purely accidental, you unfortunately have absolutely no methods for dete... (read more)

0DanielLC
You can have more to it than the complexity penalty, but you need a complexity penalty. The number of possibilities increases exponentially with the complexity. If the probability didn't go down faster, the total probability would be infinite. But that's impossible. It has to add to one.
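A worked version of that counting argument: if each program of length $n$ gets prior probability $p_n$, then, since there are $2^n$ binary programs of length $n$, normalization forces

$$\sum_{n \ge 0} 2^{n} p_n \le 1 \quad\Longrightarrow\quad p_n \le 2^{-n} \ \text{for every } n,$$

so the prior must decay at least exponentially with program length: the complexity penalty follows from normalization alone.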
gedymin10

Let me clarify my question. Why do you and iarwain1 think there are absolutely no other methods that can be used to arrive at the truth, even if they are sub-optimal ones?

6Plasmon
The prior distribution over hypotheses is a distribution over programs, which are bit strings, which are integers. The distribution must be normalizable (its sum over all hypotheses must be 1). All distributions on the integers go to 0 for large integers, which corresponds to having lower probability for longer / more complex programs. Thus, all prior distributions over hypotheses have a complexity penalty. You could conceivably use a criterion like "pick the simplest program that is longer than 100 bits" or "pick the simplest program that starts with 101101", or things like that, but I don't think you can get rid of the complexity penalty altogether.
gedymin10

Why can't there be other criteria to prefer some theories over other theories, besides simplicity?

1Plasmon
Solomonoff induction justifies this: optimal induction uses a prior which weights hypotheses by their simplicity.
gedymin00

Perhaps you can comment on this opinion that "simpler models are always more likely" is false: http://www2.denizyuret.com/ref/domingos/www.cs.washington.edu/homes/pedrod/papers/dmkd99.pdf

0passive_fist
That paper doesn't seem to be arguing against Occam's razor. Rather, it seems to be making the more specific point that model complexity on training data doesn't necessarily mean worse generalization error. I didn't read through the whole article, so I can't say if the arguments make sense, but it seems that if you follow the procedure of updating your posteriors as new data arrives, the point is moot. Besides, the complexity prior framework doesn't make that claim at all.
gedymin50

Perhaps one could say that an agent in the sense that matters for this discussion is something with a personal identity, a notion of self (in a very loose sense).

Intuitively, it seems that tool AIs are safer because they are much more transparent. When I run a modern general purpose constraint-solver tool, I'm pretty sure that no AI agent will emerge during the search process. When I pause the tool somewhere in the middle of the search and examine its state, I can predict exactly what the next steps are going to be - even though I can hardly predict the ul... (read more)

gedymin40

See this discussion in The Best Textbooks on Every Subject

I agree that the first few chapters of Jaynes are illuminating; I haven't tried to read further. Bayesian Data Analysis by Gelman feels much more practical, at least for what I personally need (a reference book for statistical techniques).

The general pre-requisites are actually spelled out in the introduction of Jaynes's Probability Theory. Emphasis mine.

The following material is addressed to readers who are already familiar with applied mathematics at the advanced undergraduate level or preferably hi

... (read more)
4selylindi
In working through the text, I have found that my undergraduate engineering degree and mathematics minor would not have been sufficient to understand the details of Jaynes' arguments, following the derivations and solving the problems. I took some graduate courses in math and statistics, and more importantly I've picked up a smattering of many fields of math after my formal education, and these plus Google have sufficed.

Be advised that there are errors (typographical, mathematical, rhetorical) in the text that can be confusing if you try to follow Jaynes' arguments exactly. Furthermore, it is most definitely written in a blustering manner (to bully his colleagues and others who learned frequentist statistics) rather than in an educational manner (to teach someone statistics for the first time). So if you want to use the text to learn the subject matter, I strongly recommend you take the denser parts slowly and invent problems based on them for yourself to solve.

I find it impossible not to constantly sense in Jaynes' tone, and especially in his many digressions propounding his philosophies of various things, the same cantankerous old-man attitude that I encounter most often in cranks. The difference is that Jaynes is not a crackpot; whether by wisdom or luck, the subject matter that became his cranky obsession is exquisitely useful for remaining sane.
3buybuydandavis
Good quote. But I would have bolded That's where Jaynes shines. Many mathematical subjects are treated axiomatically. Jaynes instead starts from the basic problem of representing uncertainty. Churning out the implications of axioms is a very different mindset than "I have data, what can I conclude from it?" I think this is true as well.
5Capla
I don't know what that means. Calculus? Analysis? Linear algebra? Matrices? Non-Euclidean geometry?
gedymin90

Scott Aaronson has formulated it in a similar way (quoted from here):

whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.

Of course, even if Q′ is solved, centuries later philosophers might still be debating the exac

... (read more)
2Nikario
Thank you for the reference. I am not sure if Aaronson and I would agree. After all, depending on the situation, a philosopher of the kind I am talking about could claim that whatever progress has been made by answering the quesion Q' also allows us to know the answer to the question Q (maybe because they are really the same question), or at least to get closer to it, instead of simply saying that Q does not have an answer. I think Protagoras' example of the question about whales being fish or not would make a good example of the former case.
gedymin20

In human society, at the highest scale, we solve the principal-agent problem by separation of powers - the legislative, executive, and judiciary powers of the state are typically divided into independent branches. This naturally leads to a categorization of AI capabilities:

  • AI with legislative power (the power to make new rules)

  • AI with high-level executive power (the power to make decisions)

  • AI with low-level executive power (to carry out orders)

  • AI with a rule-enforcing power

  • AI with a power to create new knowledge / make suggestions for decision

... (read more)
gedymin40

I don't know about the power needed to simulate the neurons, but my guess is that most of the resources are spent not on the calculations, but on interprocess communication. Running 302 processes on a Raspberry Pi and keeping hundreds of UDP sockets open probably takes a lot of its resources.

The technical solution is neither innovative nor fast. The benefits are in its distributed nature (every neuron could be simulated on a different computer) and in the simplicity of implementation. At least while 100% faithfulness to the underlying mathematical model is no... (read more)

0V_V
If each simulated "neuron" is just a linear threshold unit, as described by the paper, using a whole process to run it and exchanging messages by UDP looks like a terribly wasteful architecture. Maybe the author wants to eventually implement a computationally expensive, biologically accurate neuron model, but still I don't see the point of this architecture: even if the individual neurons were biologically accurate, the overall simulation wouldn't be, due to the non-deterministic delays and packet losses introduced by UDP messaging. I'm unimpressed.
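For scale, a linear threshold unit of the kind described is a single dot product and comparison; a minimal sketch (weights hypothetical):

```python
import numpy as np

def linear_threshold_unit(weights, bias, inputs):
    """Fire (output 1) iff the weighted sum of inputs exceeds the threshold."""
    return int(np.dot(weights, inputs) + bias > 0)

# One simulated "neuron" amounts to this single dot product and compare,
# which is why a dedicated OS process plus UDP sockets per unit is costly
# relative to the computation it performs.
print(linear_threshold_unit([0.5, -1.0], 0.1, [1, 0]))  # -> 1
```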
gedymin50

It would be interesting to see more examples of modern-day non-superintelligent domain-specific analogues of genies, sovereigns and oracles, and to look at their risks and failure modes. Admittedly, this is only inductive evidence that does not take into account the qualitative leap between them and superintelligence, but it may be better than nothing. Here are some quick ideas (do you agree with the classification?):

  • Oracles - pocket calculators (Bostrom's example); Google search engine; decision support systems.

  • Genies - industrial robots; GPS drivi

... (read more)
0Houshalter
But everything looks bad if you just measure the failures. I'm sure if they lost money on net people would stop using them.
1Larks
I think a better comparison would be with old-fashioned open-outcry pits. These were inefficient and failed frequently in opaque ways. Going electronic has made errors less frequent but also more noticeable, which means we under-appreciate the improvement.
gedymin10

It's ok, as long as the talking is done in a sufficiently rigorous manner. By analogy, a lot of discoveries in theoretical physics were made long before they could be experimentally supported. Theoretical CS also has a good track record here; for example, the first notable quantum algorithms were discovered long before the first notable quantum computers were built. Furthermore, the theory of computability mostly talks about the uncomputable (computations that cannot be realized and devices that cannot be built in this universe), so it has next to no prac... (read more)

4William_S
To have rigorous discussion, one thing we need is clear models of the thing that we are talking about (i.e., for computability, we can talk about Turing machines, or specific models of quantum computers). The level of discussion in Superintelligence still isn't at the level where the mental models are fully specified, which might be where disagreement in this discussion is coming from. I think for my mental model I'm using something like the classic tree-search-based chess-playing AI, but with a bunch of unspecified optimizations that let it do useful search in a large space of possible actions (and the ability to reason about and modify its own source code). But it's hard to be sure that I'm not sneaking some anthropomorphism into my model, which in this case is likely to lead one quickly astray.
gedymin20

To be honest, I initially had trouble understanding your use of "oversight" and had to look up the word in a dictionary. Talking about the different levels of executive power given to AI agents would make more sense to me.

0diegocaleiro
same here.
gedymin10

I agree. For example, this page says that: "in order to provide a convincing case for epigenetic inheritance, an epigenetic change must be observed in the 4th generation. "

So I wonder why they only tested three generations. Since F1 females are already born with the reproductive cells from which F2 will grow, the organism of an F0 exposes both of these future generations to itself and its environment. That some information exchange takes place there is not that surprising, but the effect may be completely lost in the F3 generation.

gedymin10

I've always thought of the MU hypothesis as a derivative of Plato's theory of forms, expressed in a modern way.

gedymin00

This is actually one of the standard counterarguments against the need for friendly AI, at least against the notion that it should be an agent / be capable of acting as an agent.

I'll try to quickly summarize the counter-counter-arguments Nick Bostrom gives in Superintelligence. (In the book, AI that is not an agent at all is called tool AI. AI that is an agent but cannot act as one (has no executive power in the real world) is called oracle AI.)

Some arguments have already been mentioned:

  • Tool AI or friendly AI without executive power cannot stop the world f
... (read more)
gedymin110

Making your mental contents look innocuous while maintaining their semantic content sounds potentially very hard

Even humans are capable of producing content (e.g. program code) where the real meaning is obfuscated. For some entertainment, try to look at this Python script in Stack Exchange Programming puzzles, and try to guess what it really does. (The answer is here.)

gedymin00

I couldn't even have a slice of pizza or an ice cream cone

Slippery slope, plain and simple. http://xkcd.com/1332/

Reducing mean consumption does not imply never eating ice cream.

gedymin00

You should have started by describing your interpretation of what the word "flourish" means. I don't think it's a standard one (any links to prove the opposite?). For now this thread is going nowhere because of disagreements on definitions.

4Salemicus
One of the things I hate most about this website is the people who love to claim that ordinary usages of English words are somehow non-standard or obscure. Do you see anything about "a life worth living"?
gedymin10

Two objections to these calculations. First, they do not take into account the inherent inefficiency of meat production (farm animals convert only a few percent of the energy in their food into consumable products), or its contribution to global carbon emissions and pollution. Second, they do not take into account the animals displaced and harmed by the indirect effects of meat production. It requires larger areas of farmland than vegetarian or seafood-based diets would.

0DanielLC
And where it gets really interesting is when you wonder if wild animals' lives are worth living. It's entirely possible that it's good to eat meat because it prevents more suffering from crowding out wild animals than it causes to the animals being farmed.
gedymin110

chickens flourish

Not many vegetarians would agree. Is a farm chicken's life worth living? Does the large number of farm chickens really have a net positive effect on animal wellbeing?

Animals that aren't useful

What about the recreational value of wild animals?

0Salemicus
I have no idea what that question even means. I don't want to save the Bengal tiger because I think it has a "life worth living" but because I want the species to flourish.

But to the extent that you are concerned that battery chickens have negative lives, why become a vegetarian? Eat free range meat. Or eat only hunted meat. And why make a fuss about trace amounts of meat products in your cheese or whatever?

Isn't it suspicious that people who make the strange claim that animals count as objects of moral concern also make the strange claim that animal lives aren't worth living, and also cash out that concern by a dietary purity ritual? Were I a cynic, I might even think that the religious-seeming ritual was the whole point, and the elaborate epicyclical theology built around it a mere after-the-fact justification.
gedymin10

The practically relevant philosophical question is not "can science understand consciousness?", but "what can we infer from observing the correlates of consciousness, or from observing their absence?". This is the question that, for example, anesthesiologists have to deal with on a daily basis.

When formulated like this, the problem is really not that different from other scientific problems where causality must be detected. Detecting causal relations is famously hard - but it's not impossible. (We're reasonably certain, for example, that smoki... (read more)

2Capla
This only solves half the problem. If the AI has no motivation to say that it is conscious, we have no reason to think that it will. We would assume that both copies were non-conscious, because they had no motivation to convince us otherwise. I suppose what we need is a test under which an AI has motivation to declare that it is conscious iff it actually is conscious. Does anyone have any idea for how to actually design such a test?
2Capla
We can identify cancer and make a distinction between cancer and the absence of cancer. We might be wrong sometimes, but an autopsy is pretty reliable, at least after the fact. The same cannot be said of consciousness, since it is in nature (NOT IN CAUSE) non-physical. I realize that I need to demonstrate this. That may take some time to write up.
gedymin60

Your article feels too abstract to really engage the reader. I would start with a surprise element (ok, you do this to some extent); have at least one practical anecdote; include concrete and practical conclusions (what life lessons follow from what the reader has learned?).

Worse, I feel that your article might in fact lead to several misconceptions about dual process theory. (At least some of the stuff does not match with my own beliefs. Should I update?)

First, you make a link between System 1 and emotions. But System 1 is still a cognitive system. It's h... (read more)

1Gleb_Tsipursky
Appreciate the feedback. I thought I provided concrete examples of life lessons in the article, namely "If we know about how our minds work, we can be intentional about influencing our own thinking and feeling patterns. We can evaluate reality more clearly, make better decisions, and improve our ability to achieve goals, thus gaining greater agency." Are you suggesting some more specific lessons? If so, can you give some examples of what you mean?

Agreed on dual process theory being a useful heuristic. So are most ways of thinking about the brain :-) Here is a more complex piece I wrote on this topic, let me know your thoughts about it.

I'm confused by your criticism of my linkage of System 1 to emotions. You say you got most of your knowledge of dual process theory from Kahneman's book. Kahneman's book states, and I quote, "System 1 is fast, intuitive, and emotional" - it is even described this way on the publisher's website, making it clear that this framing is a key component of what Kahneman sought to convey. So can you clarify why you have an issue with me making a link between System 1 and emotions, when Kahneman himself stated this?
gedymin10

Example of a mathematical fact: a formula for calculating correlation coefficient. Example of a statistical intuition: knowing when to conclude that close-to-zero correlation implies independence. (To see the problem, see this picture for some datasets in which variables are uncorrelated, but not independent.)
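A quick numpy check of exactly this trap, on synthetic data: here $y$ is fully determined by $x$, yet the Pearson correlation comes out near zero because the relationship is not linear:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100_000)
y = x ** 2                       # dependent on x by construction

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r:.3f}")    # ~0.00: no linear relationship detected
```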

4Lumifer
Not sure why you are calling this "intuition". Understanding that Pearson correlation attempts to measure a linear relationship and many relationships are not linear is just statistical knowledge, only a bit higher level than knowing the formula.
gedymin10

Be careful here. Statistical intuition does not come naturally to humans - Kahneman and others have written extensively about this. Learning some mathematical facts (relatively simple to do) without learning the correct statistical intuitions (hard to do) may well have negative utility. Unjustified self confidence is an obvious outcome.

2Capla
Can you elaborate? What is the difference between "mathematical facts" and "statistical intuitions"? Can you give an example of each?