If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and …


Happy Petrov day!

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.

  • 2007 - We started celebrating with the declaration above, followed by a brief description of the incident. In short, one man decided to ignore procedure and report an early warning system trigger as a false alarm rather than a nuclear attack.

  • 2011 - Discussion

  • 2012 - Eneasz put together an image

  • 2013 - Discussion

  • 2014 - jimrandomh shared a program guide describing how their rationalist group celebrates the occasion. "The purpose of the ritual is to make catastrophic and existential risk emotionally salient, by putting it into historical context and providing positive and negative examples of how it has been handled."

  • 2015 - Discussion

[-][anonymous]8y110

I don't know if this is lesswrong material, but I found it interesting. Cities of Tomorrow: Refugee Camps Require Longer-Term Thinking

“the average stay today in a camp is 17 years. That’s a generation.” These places need to be recognized as what they are: “cities of tomorrow,” not the temporary spaces we like to imagine. “In the Middle East, we were building camps: storage facilities for people. But the refugees were building a city,” Kleinschmidt said in an interview. Short-term thinking on camp infrastructure leads to perpetually poor conditions, all b

... (read more)

Anybody have recommendations of a site with good summaries of the best/most actionable parts from self-help books? I've found Derek Sivers' book summaries useful recently and am looking for similar resources. I find that most self-help books are 10 times as long as they really need to be, so these summaries are really nice, and let me know whether it may be worth it to read the whole book.

4ChristianKl8y
I frequently hear people saying that self-help books are too long, but I don't think that's really true. Changing deep patterns of how you deal with situations seldom happens just by reading a short summary of a position.
0Viliam7y
Self-help authors keep writing longer books, self-help customers keep learning how to read faster or switch to reading summaries on third-party websites...
4ChristianKl7y
Books get written for different purposes and read by different people for different reasons. There's a constituency that reads self-help books as insight porn, but there are other people who want to delve deep. I remember a person who read Tony Robbins' 500-page book twenty times, and every time he read it he discovered something new.

Music to be resurrected to?

Assume that you are going to die, and some years later, be brought back to life. You have the opportunity to request, ahead of time, some of the details of the environment you will wake up in. What criteria would you use to select those details; and which particular details would meet those criteria?

For example, you might wish a piece of music to be played that is highly unlikely to be played in your hearing in any other circumstances, and is extremely recognizable, allowing you the opportunity to start psychologically dealing wi... (read more)

1username28y
Nicolas Jaar - Space Is Only Noise, starting around the time I regain sound perception.
1Manfred8y
Mahler's 2nd symphony, for reasons including the obvious.
0ThoughtSpeed8y
I think my go-to here would be Low of Solipsism from Death Note. As an aspiring villain being resurrected, I can't think of anything more dastardly.
0MrMind8y
That's interesting, you think of yourself as an aspiring villain? What does that entail?
0[anonymous]8y
"Everything in Its Right Place" by Radiohead would capture the moment well; it's soothing yet disorienting, and a tad ominous.

I was at the vet a while back; one of my dogs wasn't well (she's better now). The vet took her back, and after waiting for a few minutes, the vet came back with her.

Apparently there were two possible diagnoses: let's call them x and y, as the specifics aren't important for this anecdote.

The vet specifies that, based on the tests she's run, she cannot tell which diagnosis is accurate.

So I ask the vet: which diagnosis has the higher base rate among dogs of my dog's age and breed?

The vet gives me a funny look.

I rephrase: about how many dogs of my dog's breed... (read more)

9Houshalter8y
"Base rate" is statistics jargon. I would ask something like "which disease is more common?" And then if they still don't understand, you can explain that its probably the disease that is most common, without explaining Bayes rule.
1g_pepper8y
Mightn't the vet have already factored the base rate in? Suppose x is the more common disease, but y is more strongly indicated by the diagnostics. In such a case it seems like the vet could be justified in saying that she cannot tell which diagnosis is accurate. For you to then infer that the dog most likely has x just because x is the more common disease would be putting undue weight on the Bayesian priors.
2WalterL8y
So, it seems like there could be two things going on here:
1. Maybe, in your vet's mind, she is telling you "We can't tell if this is A or B", and you are asking "But which is it?", and by refusing to answer she is doubling down on the whole "We don't know if it's A or B" situation. Like, I know what you mean is what you actually said, but normal people don't say that, and the vet is trying to reiterate that you do not know which of A or B this is. She is trying to avoid saying "mostly A", you saying "ok, treat A", the dog dying of B, and you going "You said A, you fraud, I'm suing you!"
2. The vet honestly doesn't know the answer to your question. She is a person who executes the procedures in her manuals, not a person who follows the news about every animal's frequent ailments. In her world, if an animal shows A you do X, if an animal shows B you do Y. Your question is outside her realm of curiosity.
As for another way to phrase this, I'd go with "Well, which do you think it is, A or B?". The vet's answer ought to be informed by her experience, even if it isn't explicitly phrased as "well, mostly this is what dogs suffer from". If she reiterates that there is no way to know, I'd figure it was the first case, a CYA situation, and stress that I wouldn't be mad if she was wrong.
2Brillyant8y
I'd suggest this is likely. Assuming both ailments are relatively common and neither is obviously known to be rare, I'd bet the vet just doesn't know the data necessary to discuss base rates in a meaningful way that would help determine X or Y. Side note: My experience is that sometimes the tests needed to help narrow down illnesses in animals are prohibitively expensive.
0ChristianKl8y
A straightforward question would be: "What's the probability of diagnosis A and what's the probability of diagnosis B?" Unfortunately, you are likely out of luck, because your vet doesn't know enough basic statistics to give you a decent answer.
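For concreteness, here is a minimal sketch (Python, with entirely invented numbers) of the calculation being gestured at in this subthread: combining a base rate with ambiguous test evidence via Bayes' rule. Only the two-diagnosis setup comes from the anecdote; every figure is an assumption for illustration.

    # Minimal Bayes sketch with made-up numbers; "x" and "y" are the two
    # candidate diagnoses from the anecdote, assumed to be exhaustive.
    base_rate_x = 0.08        # assumed prevalence of x in dogs of this age/breed
    base_rate_y = 0.02        # assumed prevalence of y
    lik_given_x = 0.6         # assumed P(test result | x)
    lik_given_y = 0.9         # assumed P(test result | y)

    joint_x = base_rate_x * lik_given_x
    joint_y = base_rate_y * lik_given_y
    posterior_x = joint_x / (joint_x + joint_y)   # renormalise over x and y

    print(f"P(x | test) = {posterior_x:.2f}")     # ~0.73 with these numbers

With these figures the test evidence favours y, yet the base rate still leaves x the more probable diagnosis; whether the prior or the likelihood dominates depends entirely on the actual magnitudes, which is the substance of g_pepper's caution above.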

Has Sam Harris stated his opinion on the orthogonality thesis anywhere?

4Lightwave8y
He's writing an AI book together with Eliezer, so I assume he's on board with it.
2ThoughtSpeed8y
Is that for real or are you kidding? Can you link to it?
3Lightwave8y
He's mentioned it on his podcast. It won't be out for another 1.5-2 years I think. Also Sam Harris recently did a TED talk on AI, it's now up.

Who are the current moderators?

6ChristianKl8y
I think Elo and Nancy have moderator rights. Various older members who no longer frequent the website, like EY, also have moderator rights.

Continuing my catnip research, I'm preparing to run a survey on gwern.net & Mechanical Turk about catnip responses. I have a draft survey done and would appreciate any feedback about brokenness or confusing questions: https://docs.google.com/forms/d/e/1FAIpQLSeT3GIg-pSwzDFAfNaqE-MzfJEtD0HghN_Vma68OZJtz1Pztg/viewform

0gwern7y
OK, no complaints so far, so I'm just going to launch it. Consider the survey now live. Did I mention that there will be cake?
9Elo7y
Cat weight might be relevant; also the cat's current age, body shape (fat/skinny), and a description of the cat's response to catnip.
5Elo7y
I am no expert, but I wonder if you could run a Monte Carlo simulation on your expected responses. Do the questions you ask give you enough information to yield results? I'm just not sure if your questions are homing in on the right information. Chances are there are people who know better than me.
0gwern7y
If I get at least 100 responses, then that will help narrow down the primary question of overall catnip response rate adequately in combination with the existing meta-analysis. I expect to get at least that many, and in the worst case I do not, I will simply buy the survey responses on Mechanical Turk.

The secondary question, Japanese/Australian catnip rates vs the rest of the world, I do not expect to get enough responses for, since the power analysis of the 60% vs 90% (the current average vs Japanese estimates) says I need at least 33 Japanese respondents for the basic comparison; however, Mechanical Turk allows you to limit workers by country, so my plan is to, once I see how many responses I get to the regular survey, launch country-limited surveys to get the necessary sample size. I can get ~165 survey responses with a decent per-worker reward for ~$108, so split over Japan/Korea/Australia, that ought to be adequate for the cross-country comparisons. (Japan, because that's where the anomaly is; Korea, to see if the anomaly might be due to a bottleneck in the transmission of cats from Korea to Japan back in 600-1000 CE; Australia, because a guy on Twitter told me Australian cats have very high catnip response rates; and I hopefully will get enough American/etc country responses to not need to pay for more Turk samples from other countries.) Of course, if the results are ambiguous, I will simply collect more data, as I'm under no time limits or anything.

For the tertiary question, response rates to silvervine/etc, I am not sure that it is feasible to do surveys on them. There is not much mention of them online compared to catnip, and they can be hard to get. My best guess is that of the cat owners who have used catnip, <5% of them have ever tried anything else, in which case even if I get 200 responses, I'll only have 25 responses covering the others, which will give very imprecise estimates and not allow for any sort of modeling of response rates conditional on be
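In the spirit of Elo's Monte Carlo suggestion above, a rough power-check sketch in Python. The 60%/90% rates and the 33-respondent figure come from gwern's comment; the rest-of-world sample size and everything else are assumptions for illustration, so the printed power is only indicative.

    # Rough Monte Carlo power check: if catnip response rates really were
    # ~60% (rest of world) vs ~90% (Japan), how often would samples of the
    # assumed sizes detect the difference at alpha = 0.05?
    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    rng = np.random.default_rng(0)
    p_world, p_japan = 0.60, 0.90     # rates quoted in the comment above
    n_world, n_japan = 150, 33        # assumed sample sizes (33 from the quoted power analysis)
    n_sims, alpha = 5000, 0.05

    hits = 0
    for _ in range(n_sims):
        k_world = rng.binomial(n_world, p_world)
        k_japan = rng.binomial(n_japan, p_japan)
        _, pval = proportions_ztest([k_world, k_japan], [n_world, n_japan])
        hits += pval < alpha

    print(f"estimated power: {hits / n_sims:.2f}")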
0Lumifer7y
Oh-oh...
[-][anonymous]7y20

I feel the onset of hypomania. Please bear with me if I post dumb stuff in the near future.

0[anonymous]7y
I'm going to contain anything I post to this thread, just in case it's nonsense. I was just thinking of asking: Is it rational to 'go to Belgium', as they say - to commit suicide as a preventative measure to avoid suffering?
0MrMind7y
Only in very extreme cases. Have you looked into every alternative?
0[anonymous]7y
I suppose I'm up to date on the alternatives. New alternatives pop up every so often but it's pretty frustrating tracking depression research, and opportunities for short hedonistic bliss that end in death.
0Dagon7y
I suspect there are cases where a perfectly rational, knowledgeable agent could prefer the suffering of death over the suffering of continued life. Agents with less calculating power and with less predictive power over their possible futures (say, for instance, humans) should have an extremely low prior about this, and it's hard to imagine the evidence that would bump it into the positive.
2MrMind7y
The problem with depression is that it skews your entire ability to think clearly and rationally about the future. You're no longer "a rational agent", but "a depressed agent", and that's really bad. From an outside view, of course, only very extreme pain or the certainty of inevitable decline is worth the catastrophic cost of death, but from the point of view of a depressed person, all of the future is bad, black and meaningless, and death often seems the natural way out.
2Dagon7y
Absolutely! Depression changes one's priors and one's perception of evidence, making a depressed agent even further from rational than non-depressed humans (who are even so pretty far from purely rational).

That said, all agents must make choices - that's why we use the term "agent". And even depressed agents can analyze their options using the tools of rationality, and (I hope) make better choices by doing so. It does require more care and use of the outside view to somewhat correct for depression's compromised perceptions.

Also, I'm very unsure what the threshold is where an agent would be better off abandoning attempts to rationally calculate and just accept their group's deontological rules. It's conceivable that if you don't have strong outside evidence that you're top-few-percent of consequentialist predictors of action, you should follow rules rather than making decisions based on expected results. I don't personally like that idea, and it doesn't stop me ignoring rules I don't like, but I acknowledge that I'm probably wrong in that.

Specifically, "Suicide: don't do it" seems like a rule to give a lot of weight to, as the times you're most likely tempted are the times you're estimating the future badly, and those are the times you should give the most weight to rules rather than rationalist calculations.

In the last year, someone mentioned a workout book on the #lesswrong IRC channel. I want to start exercising in my room, and that book seemed, at the time, the best place for me to start, so I am looking for it.

Help with finding the book or alternatives appreciated. Here's what I remember about it:

  • the author is someone who served time in jail or is currently serving
  • the person who talked about the book said that it emphasizes keeping the body strong and healthy without the need for weights
  • the exercises use limited space

I can't remember more right now but I will edit the post if I do.

99eB18y
I have read Convict Conditioning. The programming in that book (that is, the way the overall workout is structured) is honestly pretty bad. I highly recommend doing the reddit /r/bodyweightfitness recommended routine.
1. It's free.
2. It has videos for every exercise.
3. It is a clear and complete program that actually allows for progression (the Convict Conditioning progression standards are at best a waste of time) and keeps you working out in the proper intensity range for strength.
4. If you are doing the recommended routine you can ask questions at /r/bodyweightfitness.
The main weakness of the recommended routine is the relative focus of upper body vs. lower body. Training your lower body effectively with only bodyweight exercises is difficult, though. If you do want to use Convict Conditioning, /r/bodyweightfitness has some recommended changes which will make it more effective.
2MrMind8y
This is awesome, thank you!
4Tommi_Pajala8y
Sounds like Convict Conditioning to me. I haven't read it myself, but some friends have praised the book and the exercises included.
5MrMind8y
I've read it, still practice it and I recommend it. The only piece of 'equipment' you'll need is a horizontal bar to do pullups (a branch or anything that supports your weight will work just as well).

I've been meditating lately on the possibility of an advanced artificial intelligence modifying its own value function, and have even written some excerpts on this topic.

Is it theoretically possible? Has anyone of note written anything about this -- or anyone at all? This question is so, so interesting for me.

My thoughts led me to believe that it is certainly theoretically possible to modify it, but I could not come to any conclusion about whether the AI would want to do so. I seriously lack a good definition of a value function and an understanding of how it is enforced on the agent. I really want to tackle this problem from a human-centric point of view, but I don't really know if anthropomorphization will work here.

4pcm8y
See ontological crisis for an idea of why it might be hard to preserve a value function.
2scarcegreengrass8y
I thought of another idea. If the AI's utility function includes time discounting (like human utility functions do), it might change its future utility function. Meddler: "If you commit to adopting modified utility function X in 100 years, then I'll give you this room full of computing hardware as a gift." AI: "Deal. I only really care about this century anyway." Then the AI (assuming it has this ability) sets up an irreversible delayed command to overwrite its utility function 100 years from now.
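A toy calculation (Python; the discount factor and horizon are invented, not part of the comment above) of why a time-discounting agent might take such a deal: with geometric discounting, everything beyond a distant horizon contributes only a tiny share of present value.

    # Share of total discounted utility lying beyond a 100-year horizon,
    # assuming a geometric per-year discount factor. Numbers are illustrative.
    gamma = 0.95       # assumed discount factor
    horizon = 100      # years until the utility function would be overwritten

    total = sum(gamma**t for t in range(10000))             # ~ 1 / (1 - gamma)
    after = sum(gamma**t for t in range(horizon, 10000))

    print(f"share of value after year {horizon}: {after / total:.1%}")  # ~0.6%

Under these assumptions the AI gives up well under 1% of its discounted value in exchange for the hardware today.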
2scarcegreengrass8y
Speaking contemplatively rather than rigorously: In theory, couldn't an AI with a broken or extremely difficult utility function decide to tweak it to a similar but more achievable set of goals? Something like ... its original utility function is "First goal: Ensure that, at noon every day, -1 * -1 = -1. Secondary goal: Promote the welfare of goats." The AI might struggle with the first (impossible) task for a while, then reluctantly modify its code to delete the first goal and remove itself from the obligation to do pointless work. The AI would be okay with this change because it would produce more total utility under both functions. Now, I know that one might define 'utility function' as a description of the program's tendencies, rather than as a piece of code ... but I have a hunch that something like the above self-modification could happen with some architectures.
1WalterL8y
On the one hand, there is no magical field that tells a code file whether the modifications coming into it are from me (a human programmer) or from the AI whose values that code file encodes. So, of course, if an AI can modify a text file, it can modify its source. On the other hand, most likely the top goal in that value system is a fancy version of "I shall double never modify my value system", so it shouldn't do it.
1TheAncientGeek8y
Is it possible for a natural agent? If so, why should it be impossible for an artificial agent? Are you thinking that it would be impossible to code in software, for agents of any intelligence? Or are you saying sufficiently intelligent agents would be able and motivated to resist any accidental or deliberate changes? With regard to the latter question, note that value stability under self-improvement is far from a given... the Löbian obstacle applies to all intelligences... the carrot is always in front of the donkey! https://intelligence.org/files/TilingAgentsDraft.pdf
1UmamiSalami8y
See Omohundro's paper on convergent instrumental drives
0username28y
Depends entirely on the agent.

Game theory research reveals fragility of common resources

"In many applications, people decide how much of a resource to use, and they know that if they use a certain amount and if others use a certain amount they are going to get some return, but at the risk that the resource is going to fail,"

https://www.sciencedaily.com/releases/2016/09/160929143603.htm

http://www.sciencedirect.com/science/article/pii/S0899825616300458

0ChristianKl7y
under certain theoretical conditions

I just thought of this 'cute' question and not sure how to answer it.

The sample space of an empirical statement is {True, False}. Given an empirical statement, one would assign a prior probability 0<p<1 to TRUE and one minus that to FALSE. One would not assign p=1 or p=0, because that wouldn't allow belief updating.

For example: Santa Claus is real.

I suppose most people in LW will assign a very small p to that statement, but not zero. Now my question is, what is the prior probability value for the following statement:

Prior probability cannot be set to 1.

3ChristianKl7y
"Prior probability cannot be set to 1" is itself not an empirical statement. It's a question about modelling.
3Gram_Stone7y
Actual numbers are never easy to come up with in situations like these, but some of the uncertainty is in whether or not priors of zero or one are bad, and some of it's in the logical consequences of Bayes' Theorem with priors of zero or one. The first component doesn't seem especially different from other kinds of moral uncertainty, and the second component doesn't seem especially different from other kinds of uncertainty about intuitively obvious mathematical facts, like that described in How to Convince Me That 2 + 2 = 3.
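To spell out the "wouldn't allow belief updating" point from the original question (a standard observation, stated here in LaTeX for reference): if the prior is exactly 1, Bayes' rule returns 1 no matter what evidence arrives, provided the evidence is possible at all under the belief state.

    \[
      P(H \mid E)
      = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
      = \frac{P(E \mid H)\cdot 1}{P(E \mid H)\cdot 1 + P(E \mid \neg H)\cdot 0}
      = 1
    \]

Symmetrically, a prior of 0 stays at 0, which is why the question above excludes both endpoints.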

Trait Entitlement: A Cognitive-Personality Source of Vulnerability to Psychological Distress.

"First, exaggerated expectations, notions of the self as special, and inflated deservingness associated with trait entitlement present the individual with a continual vulnerability to unmet expectations. Second, entitled individuals are likely to interpret these unmet expectations in ways that foster disappointment, ego threat, and a sense of perceived injustice, all of which may lead to psychological distress indicators such as dissatisfaction across multiple... (read more)

'Tis a shame that an event like tonight's debate won't, and ostensibly never would have, received any direct coverage/discussion on LW, or any other rationality sites of which I am aware.

I know (I know, I know...) politics is the mind killer, but tonight—and the U.S. POTUS election writ large—is shaping up to be a very consequential world event, and LW is busy discussing base rates at the vet and LPTs for getting fit given limited square footage.

9Alejandro18y
Lately it seems that at least 50% of the Slate Star Codex open threads are filled by Trump/Clinton discussions, so I'm willing to bet that the debate will be covered there as well.
5ChristianKl8y
Given that a previous US debate resulted in a LW person writing an annotated version that pointed out every wrong claim made during the debate, why do you think that LW shies away from discussing US debates? Secondly, what do you think "direct coverage" would produce? There's no advantage for rational thinking in covering an event like this live. At least I can't imagine this debate going in a way where my actions would significantly change based on what happens in it, such that it would be bad to gain the information a week later. Direct coverage is an illness of mainstream media. Most important events in the world aren't recognized as such when they happen. We have Petrov Day: how many newspapers covered the event the next day? Or even in the next month?
4username28y
Is that actually true? I've lived through many US presidential eras, including multiple ones defined by "change." Nothing of consequence really changed. Why should this be any different? (Rhetorical question, please don't reply, as the answer would be off-topic.) Consider the possibility that if you want to be effective in your life goals (the point of rationality, no?), then you need to do so from a framework outside the bounds of political thought. Advanced rationalists may use political action as a tool, but not for the search for truth that we care about here. Political commentary has little relevance to the work that we do.
3ChristianKl8y
I don't think nothing of consequence changed for the Iraqis through the election of Bush.
0username27y
Compare that with Syria under Obama. "Meet the new boss, same as the old boss..."
0Brillyant8y
I'd argue U.S. policy is too important and consequential to require elaboration. "Following politics" can be a waste of time, as it can be as big a reality-show circus as the Kardashians. But it seems to me there are productive ways to discuss the election in a rational way. And it seems to me this is a useful way to spend some time and resources.

A thing already known to computer scientists, but still useful to remember: as per Kleene's normal form theorem, a universal Turing machine is a primitive recursive function.
Meaning that if an angel gives you the encoding of a program you only need recursion, and not unbounded search, to run it.
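For reference, a standard statement of the theorem under discussion (textbook form, not the poster's own notation): T(e, n, x) is the primitive recursive Kleene T predicate, which holds when x encodes a complete, terminating computation history of program e on input n, and U is a primitive recursive function extracting the output from such a history. The single unbounded search is confined to the μ-operator:

    \[
      \varphi_e(n) \simeq U\bigl(\mu x.\, T(e, n, x)\bigr)
    \]

The replies below turn on exactly this point: U and T are primitive recursive, but finding the witness x is not.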

0Pfft8y
The claim as stated is false. The standard notion of a UTM takes a representation of a program, and interprets it. That's not primitive recursive, because the interpreter has an unbounded loop in it. The thing that is primitive recursive is a function that takes a program and a number of steps to run it for (this corresponds to the U and T in the normal form theorem), but that's not quite the thing that's usually meant by a universal machine. I think the fact that you just need one loop is interesting, but it doesn't go as far as you claim; if an angel gives you a program, you still don't know how many steps to run it for, so you still need that one unbounded loop.
0MrMind7y
Nope. The standard notion of a UTM takes the representation of a program and an input, and interprets it. With the caveat that those computations terminate! What you say, that the number given to the UTM is the number of steps for which the machine must run, is not what is asserted by Kleene's theorem, which is about functions of natural numbers: the T relation checks, primitive recursively, the encoding of a program and of an input, which is then fed to the universal interpreter. You do not tell a Turing machine for how many steps it needs to run, because once a function is defined on an input, it will run and then stop. The fact that some partial recursive function is undefined for some input is accounted for by the unbounded search, but this term is not part of the U or the T function. The Kleene equivalence needs, as you say, unbounded search, but if the T checks out, it means that x is the encoding of e and n (a program and its input), and that the function will terminate on that input. No need to say for how many steps to run the function. Indeed, this is true of, and evident in, any programming language: you give the interpreter the program and the input, not the number of steps.
0Pfft7y
See wikipedia. The point is that T does not just take the input n to the program to be run, it takes an argument x which encodes the entire list of steps the program e would execute on that input. In particular, the length of the list x is the number of steps. That's why T can be primitive recursive.
0MrMind7y
From the page you link: Also from the same page:
0Drahflow8y
A counterexample to your claim: Ackermann(m,m) is a computable function, hence computable by a universal Turing machine. Yet it is designed not to be primitive recursive. And indeed Kleene's normal form theorem requires one application of the μ-operator, which introduces unbounded search.
0MrMind8y
Yes, but the U() and the T() are primitive recursive. Unbounded search is necessary to get the encoding of the program, but not to execute it, that's why I said "if an angel gives you the encoding". The normal form theorem indeed says that any partial recursive function is equivalent to two primitive recursive functions / relations, namely U and T, and one application of unbounded search.
0Drahflow7y
Quoting https://en.wikipedia.org/wiki/Kleene%27s_T_predicate: In other words: If someone gives you an encoding of a program, an encoding of its input and a trace of its run, you can check with a primitive recursive function whether you have been lied to.
0MrMind7y
Oh! This point had evaded me: I thought x encoded just the program and the input, not the entire computation history. So U, instead of executing anything, just locates the last thing written on the tape according to x and repeats it. Well, I'm disappointed... at U and at myself.
0username28y
Why is this useful to remember?
0MrMind8y
Because primitive recursion is quite easy, and so it is quite easy to get a universal Turing machine. Filling that machine with a useful program is another thing entirely, but that's why we have evolution and programmers...
1username28y
Something that also makes this point is AIXI. All the complexity of human-level AGI or beyond can be accomplished in a few short lines of code... if you had the luxury of running with infinite compute resources and allowed some handwavery around defining utility functions. The real challenge isn't solving the problem in principle, but defining the problem in the first place and then reducing the solution to practice / conforming to the constraints of the real world.
3entirelyuseless8y
"A few short lines of code..." AIXI is not computable. If we had a computer that could execute any finite number of lines of code instantaneously, and an infinite amount of memory, we would not know how to make it behave intelligently.
-2username27y
This is incorrect. AIXI is "not computable" only in the sense that it will not halt, on the sorts of problems we care about, on a real computer of realistically finite capabilities in a finite amount of time. That's not what is generally meant by 'computable'. But in any case, if you assume these restrictions away as you did (infinite clock speed, infinite memory), then it absolutely is computable in the sense that you can define a Turing machine to perform the computation, and the computation will terminate in a finite amount of time, under the specified assumptions. Simple reinforcement learning coupled with Solomonoff induction and an Occam prior (aka AIXI) results in intelligent behavior on arbitrary problem sets. It just has computational requirements that are impossible to meet in practice. But that's very different from uncomputability.
3entirelyuseless7y
Sorry, you are simply mistaken here. Go and read more about it before you say anything else.
-2username27y
Okay random person on the internet.
3entirelyuseless7y
If you can't use Google, see here. They even explain exactly why you are mistaken -- because Solomonoff induction is not computable in the first place, so nothing using it can be computable.
0username27y
Taboo the word computable. (If that's not enough of a hint, notice that Solomonoff is "incomputable" only for finite computers, whereas this thread is assuming infinite computational resources.)
2entirelyuseless7y
Again, you are mistaken. I assumed that you could execute any finite number of instructions in an instant. Computing Solomonoff probabilities requires executing an infinite number of instructions, since it implies assigning probabilities to all possible hypotheses that result in the appearances. In other words, if you assume the ability to execute an infinite number of instructions (as opposed to simply the instantaneous execution of any finite number), you will indeed be able to "compute" the incomputable. But you will also be able to solve the halting problem, by running a program for an infinite number of steps and checking whether it halts during that process or not. As you said earlier, this is not what is typically meant by computable. (If that is not clear enough for you, consider the fact that a Turing machine is allowed an infinite amount of "memory" by definition, and the amount of time it takes to execute a program is no part of the formalism. So "computable" and "incomputable" in standard terminology do indeed apply to computers with infinite resources in the sense that I specified.)
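For reference, one standard way of writing the Solomonoff prior (a sketch; notation varies across presentations):

    \[
      M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
    \]

where the sum ranges over all (infinitely many) programs p whose output on the universal machine U begins with x, and |p| is the program's length in bits. The infinite sum over programs, not any shortage of memory or clock speed, is what makes the prior incomputable in the standard sense being discussed here.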
0username27y
Solomonoff induction is not in fact infinite due to the Occam prior, because a minimax branch pruning algorithm eventually trims high-complexity possibilities.
2entirelyuseless7y
Ok, let's go back and review this conversation. You started out by saying, in essence, that general AI is just a matter of having good enough hardware. You were wrong. Dead wrong. The opposite is true: it is purely a matter of software, and sufficiently good hardware. We have no idea how good the hardware needs to be. It is possible that a general AI could be programmed on the PC I am currently using, for all we know. Since we simply do not know how to program an AI, we do not know whether it could run on this computer or not.

You supported your mistake with the false claim that AIXI and Solomonoff induction are computable, in the usual, technical sense. You spoke of this as though it were a simple fact that any well educated person knows. The truth was the opposite: neither one is computable, in the usual, technical sense. And the usual technical sense of incomputable implies that the thing is incomputable even without a limitation on memory or clock speed, as long as you are allowed to execute a finite number of instructions, even instantaneously.

You respond now by saying, "Solomonoff induction is not in fact infinite..." Then you are not talking about Solomonoff induction, but some approximation of it. But in that case, conclusions that follow from the technical sense of Solomonoff induction do not follow. So you have no reason to assume that some particular program will result in intelligent behavior, even removing limitations of memory and clock speed. And until someone finds that program, and proves that it will result in intelligent behavior, no one knows how to program general AI, even without hardware limitations. That is our present situation.
0username27y
Ok this is where the misunderstanding happened. What I said was "if you had the luxury of running with infinite compute resources and allow some handwavery around defining utility functions." Truly infinite compute resources will never exist. So that's not a claim about "we just need better hardware" but rather "if we had magic oracle pixie dust, it'd be easy." The rest I am uninterested in debating further.
0entirelyuseless7y
That's fine. As far as I can see you have corrected your mistaken view, even though you do have the usual human desire not to admit that you have done so, even though such a correction is a good thing, not a bad thing. Your statement would be true if you meant by infinite resources, the ability to execute an infinite number of statements, and complete that infinite process. In the same way it would be true that we could solve the halting problem, and resolve the truth or falsehood of every mathematical claim. But in fact you meant that if you have unlimited resources in a more practical sense: unlimited memory and computing speed (it is evident that you meant this, since when I stipulated this you persisted in your mistaken assertion.) And this is not enough, without the software knowledge that we do not have.
-3username27y
Sorry, no, you seem to have completely missed the minimax aspect of the problem -- an infinite integral with a weight that limits to zero has finitely bounded solutions. But it is not worth my time to debate this. Good day, sir.
0entirelyuseless7y
I did not miss the fact that you are talking about an approximation. There is no guarantee that any particular approximation will result in intelligent behavior. Claiming that there is, is claiming to know more than all the AI experts in the world. Also, at this point you are retracting your correction and adopting your original absurd view, which is unfortunate.