All of estimator's Comments + Replies

Is this actually a bad thing? In both cases, Bob and Sally not only succeeded in their initial goals, but also made some extra progress.

Also, fictional evidence. It is not implausible to imagine a scenario in which Bob does all the same things, learns French and German, and then fails on e.g. Spanish. The same goes for Sally.

In general, if you have tried some strategy and succeeded, it does make sense to go ahead and try it on other problems (until it finally stops working). If you have invented e.g. a new machine learning method to solve a specific practical ... (read more)

0SquirrelInHell
A valid point. There's a crucial difference here, though. Your machine learning method does not get tired, or bored. It does not say "to hell with this, I've had enough". The stories point out the difference between having a successful method for doing something, and having the motivation to do it.

Well, I agree, that would help an FAI build people similar to you. But why do you want an FAI to do that?

And what copying precision is OK for you? Would just making a clone based on your DNA suffice? Maybe you don't even have to bother with all these screenshots and photos.

I'm very skeptical of the third. A human brain contains ~10^10 neurons and ~10^14 synapses -- which would be hard to infer from ~10^5 photos/screenshots, especially considering that they don't convey that much information about your brain structure. DNA and comprehensive brain scans are better, but I suspect that getting brain scans with the required precision isn't easy.
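(A rough information budget makes the gap concrete. The sketch below uses my own loose assumptions for bits-per-photo and bits-per-synapse; the point is only the orders of magnitude.)

```python
# Back-of-the-envelope: can ~10^5 photos/screenshots pin down ~10^14 synapses?
photos = 1e5
bits_per_photo = 1e6     # generous: assume ~10^6 brain-relevant bits per photo
synapses = 1e14
bits_per_synapse = 10    # loose: partner neurons + strength, heavily compressed

available = photos * bits_per_photo      # ~10^11 bits
required = synapses * bits_per_synapse   # ~10^15 bits
print(f"shortfall: ~{required / available:,.0f}x")  # ~10,000x short
```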

Cryonics, at least, might work.

1Tem42
DNA and brain scans are far from perfect -- you will only get someone "you-ish". In the absence of a solution, you can at least get a bit more you-ness cheaply when the opportunity presents itself. A sufficiently powerful simulation could take all the possible yous indicated by DNA and scans, and see which yous are consistent with the sequences you have saved through screenshots etc. It's not perfect, but it should be a little bit better. Even better would be if you did something more complex and less externally guided than web-browsing... write a book, a blog, or a song. Also, save your LessWrong username in that file!
1D_Malik
You don't need to reconstruct all the neurons and synapses, though. If something behaves almost exactly as I would behave, I'd say that thing is me. 20 years of screenshots at 8 hours a day is around 14% of a waking lifetime, which seems like enough to pick out from mindspace a mind that behaves very similarly to mine.
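(The arithmetic behind that 14% figure checks out; a quick sketch using my own assumed numbers of a ~70-year span and ~16 waking hours per day.)

```python
# Fraction of a waking lifetime covered by 20 years of screenshots at 8 hours/day
waking_hours = 70 * 365 * 16   # assumed: ~70 years, ~16 waking hours/day
recorded_hours = 20 * 365 * 8  # 20 years of screenshots at 8 hours/day
print(f"{recorded_hours / waking_hours:.1%}")  # 14.3%
```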

It is; and actually it is a more plausible scenario. Aliens may well want that; humans do it both in fiction and in reality -- for example, see the Prime Directive in Star Trek and the practice of sterilizing rovers before sending them to other planets in real life.

I, however, investigated that particular flavor of the Zoo hypothesis in the post.

I don't know whether the statement (intelligence => consciousness) is true, so I assign a non-zero probability to it being false.

Suppose I said "Assume NP = P", or the contrary, "Assume NP != P". One of those statements is logically false (in the same way 1 = 2 is false). Still, while you can dismiss an argument which starts with "Assume 1 = 2", you probably shouldn't do the same with the NP ones, even though one of them is, strictly speaking, logical nonsense.

Also a few words about concepts. You can explain a concept using other con... (read more)

0[anonymous]
Ok, fair enough. So, what you're really saying is that the aliens lack some indefinable trait that the humans consider "moral", and the humans lack a definable trait that the aliens consider moral. This is a common scifi scenario, explored elsewhere on the site; see e.g. Three Worlds Collide. Your specific scenario seems highly improbable to me: humans are considered immoral, but somehow miraculously they created something that is considered moral, and the response is to hide from the inferior immoral civilization.

Modern computers can be programmed to do almost every task a human can do, including very high-level ones; that's why, sort of, yes, they are (and maybe sort of conscious, if you are willing to stretch this concept that far).

Some time ago we could program computers to execute some algorithm which solves a problem; now we have machine learning and don't have to provide an algorithm for every task; but we still have different machine learning algorithms for different areas/meta-tasks (computer vision, classification, time series prediction, etc.). When we bu... (read more)

2Viliam
"can be programmed to" is not the same thing as intelligence. It requires external intelligence to program it. Using the same pattern, I could say that atoms are intelligent (and maybe sort-of conscious), because for almost any human task, they can be rebuilt into something that does it.

Makes sense.

Anyway, any trait which isn't consciousness (and obviously it wouldn't be consciousness) would suffice, provided there is some reason to hide from Earth rather than destroy it.

estimator-30

There are concepts which are hard to explain (given our current understanding of them). Consciousness is one of them. Qualia. Subjective experience. The thing which separates p-zombies from non-p-zombies.

If you don't already understand what I mean, there is little chance that I would be able to explain.

As for the assumption, I agree that it is implausible, yet possible. Do you consider your computer conscious?

And no doubt the scenarios you mention are more plausible.

0Manfred
Are (modern) computers intelligent but not conscious, by your lights? If so, then there's a very important thing you might provide some insight into, which is what sort of observations humans could make of an alien race, that would lead to us thinking that they're intelligent but not conscious.
0[anonymous]
If you don't know what you're talking about when you say "consciousness", your premise becomes incoherent.

Why do you think it is unlikely? I think any simple criterion which separates the aliens from their environment would suffice.

Personally, I think that the scenario is implausible for a different reason: the human moral system would easily adapt to such aliens. People sometimes personify things that aren't remotely sentient, let alone aliens who would actually act like sentient/conscious beings.

Another reason is that I consider sentience without consciousness relatively implausible.

2Vaniver
Basically, the hierarchical control model of intelligence, which sees 'intelligence' as trying to maintain some perception at some reference level by actuating the environment. (Longer explanation here.) If you have multiple control systems, and they have different reference levels, then they will get into 'conflict', much like a tug of war. That is, simple intelligence looks like it leads to rivalry rather than cooperation by default, and so valuing intelligence rather than alignment seems weird; there's not a clear path that leads from nothing to there.

Filters don't have to be mutually exclusive, and as for the collectively exhaustive part, take all plausible Great Filter candidates.

I don't quite understand the Great Filter hype, by the way; having a single cause for civilization failure seems very implausible (<1%).

It's extremely hard to ban the research worldwide, and then it's extremely hard to enforce such a decision.

Firstly, you'll have to convince all the world's governments (btw, there are >200) to pass such laws.

Then you'll likely have all the powerful nations doing the research secretly, because it provides powerful weaponry / other ways to acquire power; or just out of fear that some other government will do it first.

And even if you somehow managed to pass the law worldwide, and stopped governments from doing research secretly, how would you stop individu... (read more)

Why do you prefer offline conversations to online?

Off the top of my head, I can name 3 advantages of online communication, which are quite important to LessWrong:

  • You don't have to go anywhere. Since the LW community is distributed all over the world, this is really important; when you go to meetups, you can communicate only with people who happen to be in the same place as you, whereas when you communicate online, you can communicate with everyone.

  • You have more time to think before replying, if you need to. For example, you can support your arguments with relevant

... (read more)
1Douglas_Knight
Offline conversations are higher bandwidth. And not just because they are lower latency.
2Viliam
They satisfy me emotionally on a level online conversations don't. Something in my brain generates a feeling of "a tribe" more intensely. An offline conversation has a potential to instigate other offline activities. (As an example of what really happened: going together to a gym and having a lecture on "rational" exercising.) But I agree with what you wrote; online activities also have their advantages. It just seems to me we have too much online, too little offline (at least those who don't live in the Bay Area).
estimator190

I have noticed that many people here want LW resurrection for the sake of LW resurrection.

But why do you want it in the first place?

Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.

After all, if you think that Eliezer's writings constitute most of LW's value, and Eliezer doesn't write here anymore, maybe the wise decision is to let it decay.

Beware lost purposes.

6Viliam
Emotionally -- for the feeling that something new and great is happening here, and I can see it growing.

Reflecting on this: I should not optimize for my emotions (wireheading), but the emotions are important and should reflect reality. If great things are not happening, I want to know that, and I want to fix that. But if great things are happening, then I would like a mechanism that aligns my emotions with this fact.

Okay, what exactly are the "great things" I am thinking about here? What was the referent of this emotion when Eliezer was writing the Sequences? Back then, merely the fact that "there will exist a blog about rationality; without Straw Vulcanism, without Deep Wisdom" seemed like a huge improvement of the world, because it seemed that once such a blog existed, rational people would be able to meet there and conspire to optimize the universe.

Did this happen? Well, we have MIRI and CFAR, meetups in various countries (I really appreciate not having to travel across the planet just to meet people with similar values). Do they have impact other than providing people a nice place to chat? I hope so.

Maybe the lowest-hanging fruit was already picked. If someone tried to write Sequences 2.0, what would it be about? Cognitive biases that Eliezer skipped? Or the same ones, perhaps more nicely written, with better examples? Both would be nice things to have, but their awesomeness would probably be smaller than going from zero to Sequences 1.0. (Although, if the Sequences 2.0 were written so well that they became a bestseller, and thousands of students outside of existing rationalist communities read them, then I would rate that as more awesome. So the possibility is there. It just requires very specialized skills.) Or maybe explaining some mathematical or programming concepts in a more accessible way. I mean those concepts that you can use in thinking about probability or how the human brain works.

Internet vs real li

What is the point of having separate Open Threads and Stupid Questions threads, instead of allowing "stupid questions" in OTs and making OTs more frequent?

3tut
You are allowed to ask in the open thread. I don't think having it more often would help. The SQ thread is for things that you are embarrassed or afraid to ask elsewhere. Apparently some people have questions that they didn't bring up before the first stupid questions thread.

The advantage of having Stupid Questions threads is that it's easier to make it clear that the questions should be treated kindly.

And the effort required to earn the money to buy the ring is also wasted.

No, it's not. You have produced (hopefully) valuable goods or services; why are they wasted, from the viewpoint of society?

Such cost calculations are wildly overestimated.

Suppose you buy a luxury item, like a golden ring with brilliants. You pay a lot of money, but your money isn't going to disappear; it is redistributed between traders, jewelers, miners, etc. The only thing that's lost is the total effort required to produce that ring, which often costs an order of magnitude less. And if the item you buy is actually useful, the wasted effort is even lower.
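(To make that accounting explicit -- a toy sketch with made-up numbers; the price is a transfer between people, while the production effort is the real resource cost.)

```python
# Toy accounting for buying a luxury ring (all numbers invented)
price = 10_000             # paid by the buyer; redistributed to traders, jewelers, miners, etc.
production_effort = 1_000  # real resources consumed making the ring

social_cost = production_effort  # the money itself just changes hands
print(f"social cost is {social_cost / price:.0%} of the sticker price")  # 10%
```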

The cost of having kids is so high for you because you will likely raise well-educated children with high intellig... (read more)

0RomeoStevens
You chose the worst possible example. Extreme margins mask the issue.
0Good_Burning_Plastic
At equilibrium, the price equals the marginal cost; sure, it is more than the average cost, but I can't see why the latter is relevant. And the effort required to earn the money to buy the ring is also wasted.
2IlyaShpitser
Sounds like a mistake a native Russian speaker would make :).
estimator100

Well, everyone will likely die sooner or later, even post-Singularity (provided that it happens, which isn't quite a solid fact).

Anyway, I think that any morality system which proclaims each and every birth that has happened so far unethical is inadequate.

8Sarunas
Yes, if humanity actually started to follow such a system, it would be a human-race version of a movie robot getting confused by a logical paradox and exploding out of existence.

That only works if there are few levels of abstraction; I doubt that you can derive how programs work at the machine-code level from your knowledge of physics and high-level programming. Sometimes, gears are so small that you can't even see them in your top-level big picture, and sometimes just climbing up one level of abstraction takes enormous effort if you don't know in advance how to do it.

I think that you should understand, at least once, how the system works on each level, and refresh/deepen that knowledge when you need it.

0Nanashi
The definition of "fundamentals" differs, though, depending on how abstract you get. The more layers of abstraction, the more abstract the fundamentals. If my goal is high-level programming, I don't need to know how to write code on bare metal. That's why I advocate breaking things down until you reach the level of triviality for you personally. Most people will find "writing a for-loop" to be trivial, without having to go farther down the rabbit hole. At a certain point, breaking things down too far actually makes things less trivial.

Read up on what a matrix is, how to add, multiply, and invert matrices, what a determinant is, and what an eigenvector is -- that's enough to get you started. There are many algorithms in ML where vectors/matrices are used mostly as handy notation.

Yes, you will be unable to understand some parts of ML which substantially require linear algebra; yes, understanding ML without linear algebra is harder; yes, you need linear algebra for almost any kind of serious ML research -- but it doesn't mean that you have to spend a few years studying arcane math before you can open an ML textbook.
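(For what it's worth, that entire starter toolkit fits in a few lines of NumPy; a sketch with arbitrary matrix values.)

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

C = A + B                 # addition: element-wise
P = A @ B                 # multiplication: rows of A times columns of B
A_inv = np.linalg.inv(A)  # inverse: A @ A_inv is (numerically) the identity
d = np.linalg.det(A)      # determinant: zero exactly when A is not invertible
w, v = np.linalg.eig(A)   # eigenvalues w, eigenvectors as columns of v:
                          # A @ v[:, i] == w[i] * v[:, i] (up to rounding)
```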

0Nornagest
Who said anything about a few years? If you paid attention in high school, the linear algebra background you need is at most a few months' worth of work. I was providing a single counterexample, not saying that the full prerequisite list (which, if memory serves, is most of a CS curriculum for your average ML class) is always necessary.

You're right; you have to learn a solid background for research. But still, it often makes sense to learn in the reverse order.

estimator-10

Can you unpack "approximation of Solomonoff induction"? Approximation in what sense?

0Houshalter
I walk through each step in the post. You can approximate Turing-complete algorithms with logic gates. And NNs are basically just continuous logic gates and can compute the same functions. And there are algorithms like MCMC which can approximate Bayesian inference.
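(To illustrate the "continuous logic gates" point -- my sketch, not Houshalter's code: a single sigmoid unit with fixed weights behaves like a soft AND or OR, and approaches the exact Boolean gate as the weight scale grows.)

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def soft_and(a, b, gain=10.0):
    # High output only when both inputs are near 1; bias -1.5 sets the threshold.
    return sigmoid(gain * (a + b - 1.5))

def soft_or(a, b, gain=10.0):
    # High output when either input is near 1.
    return sigmoid(gain * (a + b - 0.5))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(soft_and(a, b), 3), round(soft_or(a, b), 3))
# soft_and -> 0.0, 0.007, 0.007, 0.993: approaches Boolean AND as gain grows
# soft_or  -> 0.007, 0.993, 0.993, 1.0: approaches Boolean OR
```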

In my experience, prerequisites in math/science can (and should) often be ignored, and learned as you actually need them. People who thoroughly follow all the prerequisites often end up bogged down in numerous science fields which have only a weak connection to what they wanted to learn initially, and then get demotivated and drop out of their endeavor. This is a common failure mode.

For instance, you need probability theory to do machine learning, but you are unlikely to encounter some parts of it, and there are also parts of ML which require very little of it. It totally makes sense to start with those.

0btrettel
I'm thinking more specifically than you are. Rather than learning probability theory to understand ML, learn only what you determine to be necessary for what ML applications you are interested in. The concept maps I use are very specific, and they avoid the weak connection problem you mention. (It's worth noting that I develop these as an autodidact, so I don't have to take an entire class to just get a few facts I'm interested in.)
0Nornagest
On the other hand, if you don't have a solid grasp of linear algebra, your ability to do most types of machine learning is seriously impaired. You can learn techniques like e.g. matrix inversions as needed to implement the algorithms you're learning, but if you don't understand how those techniques work in their original context, they become very hard to debug or optimize. Similarly for e.g. cryptography and basic information theory. That's probably more the exception than the rule, though; I sense that the point of most prerequisites in a traditional science curriculum is less to provide skills to build on and more to build habits of rigorous thinking.
0[anonymous]
Can I give a counterexample? I think that way of learning things might help if you only need to apply the higher-level skills as you learned them, but if you need to develop or research those fields yourself, I've found you really do need the background. As in, I have been bitten on the ass by my own choice not to double-major in mathematics in undergrad, thus resulting in my having to start climbing the towers of continuous probability and statistics/ML, abstract algebra, logic, real analysis, category theory, and topology in and after my MSc.

One simple UI improvement for the site: add a link from comments in the inbox to that comment in the context of its post; as it is now, I have to click twice to get to the post and then scroll down to the comment.

0Adam Zerner
That plus 10,000 other things :)

But these are the things pretty much everybody does while learning languages.

4Nanashi
Well, of course they do. Because these things are necessary for learning a language. This is the 20% that's most efficient. By definition, someone who puts in 100% of the effort will be doing what I did. The efficiency of this approach revolves around what you don't do. You're excising the 80%. I didn't spend long hours learning katakana, hiragana and kanji. I didn't learn the more complex tenses and conjugations. I didn't spend time on vocabulary words that are highly situational. Contrast this with a typical Japanese textbook.
3[anonymous]
There seem to be two major approaches to learning a language. One is to go to a language school / course where the teachers, in my experience, teach it like an academic discipline + the usual guess-my-password bullshit, so you get tested and graded on things like grammar, like a test where you need to fill in conjugations / declensions into holes in a text. (Obviously I am talking about languages that have those kinds of things, like Germanic or Romance ones.) Case in point: part of my B2-level German exam at the University of Vienna was exactly that kind of hole-filling, and it felt really wrong, as it has not much to do with communication; it is a more academic approach.

The other approach is to do something like this for a while, but when you get to that basic point where you can say "Jack would have ordered a beer yesterday if he had money on him", ditch it and pretty much learn from immersion. Screw grammar; just read a lot of books, figure out words from the context, and conduct imaginary or real conversations no matter how bad the grammar is. Real people prefer to communicate with people who talk fast, not correctly. Talking with someone who speaks at a normal speed, even like "me no want buy house, me want rent house now", is far better than talking with someone who goes "I no... (long pause) do not? want ... (long pause) want to? buy a house, rather... (long pause)... instead? I want to rent it... (long pause) rent one". I used to be that second guy in 2 languages and it sucked.

(Now of course you may think "but everybody knows immersion is better, it is not even new" -- yeah, apparently that everybody does not include the huge European language school chains like Berlitz and their who knows how many students...)

Also, I'd like to compare your system against a common-sense reasoning baseline. What do you think are the main differences between your approach and the usual approaches to skill learning? What will be the difference in actions?

I'm asking because your guide contains quite a long list of recommendations/actions, many of which are used (probably intuitively/implicitly) by almost any sensible person. Also, some of the recommendations clearly have more impact than others. So, what happens if we apply the Pareto principle to your learning system? Which 20% are the most important? What is at the core of your approach?

0btrettel
One piece of information you can use to determine what is most important is the number of other skills which require a certain skill as a prerequisite. Prerequisites should obviously be learned first, and it makes sense to learn them in order of how many doors they open. This is how I prioritize at the moment if I'm not considering subjective measures of "usefulness". For my learning goals, I've started making concept maps, partly as it helps me understand a subject by understanding how concepts are related, and partly to identify what to learn next as described above. It becomes fairly obvious that I should learn X if I want to learn Y and Z and X is a prerequisite for both.
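(A minimal sketch of that prioritization rule -- my illustration with made-up skill names, not btrettel's actual concept maps: represent prerequisites as a DAG and rank each skill by how many others it transitively unlocks.)

```python
from functools import lru_cache

# skill -> skills that list it as a prerequisite (hypothetical example data)
unlocks = {
    "linear algebra": ["machine learning", "computer graphics"],
    "probability": ["machine learning", "statistics"],
    "machine learning": ["deep learning"],
    "statistics": [],
    "computer graphics": [],
    "deep learning": [],
}

@lru_cache(maxsize=None)
def doors_opened(skill):
    # All skills transitively unlocked by learning this one (the graph is a DAG).
    opened = set()
    for nxt in unlocks[skill]:
        opened.add(nxt)
        opened |= doors_opened(nxt)
    return frozenset(opened)

for skill in sorted(unlocks, key=lambda s: -len(doors_opened(s))):
    print(f"{skill}: opens {len(doors_opened(skill))} doors")
# linear algebra and probability each open 3 doors, so learn them first
```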
2Nanashi
As I mentioned in another comment, the difference between this and the "common sense" approach is in what this system does not do. As for which 20% of this system gives you the most bang for your buck? That's a good question. Right now my "safe" answer is that it depends on the type of skill you're trying to learn. The trouble is that the common thread among all the skills ("Find the 20% of the skill that yields 80% of the results") doesn't have a lot of practical value -- like telling someone that all they need to do to lose weight is eat less and exercise more. Let me think about it some more and I'll get back to you.

I meant something like this.

... take part in routine conversations; write & understand simple written text; make notes & understand most of the general meaning of lectures, meetings, TV programmes and extract basic information from a written document.

2Nanashi
I'll give a more in-depth breakdown soon, but for now, I'd probably take a similar approach to the one I took to learning to read Japanese: learn basic sentence structure, learn the top 150-ish vocabulary words, avoid books written in non-romaji. Practice hearing the spoken word by listening to speeches and following their transcriptions. My exception protocol for unrecognized words was to look them up. And for irregular sentence structure, to guess based on context. It worked for watching movies and reading, mostly, but as you can tell, yoi kakikomu koto ga dekimasen*. I'd have to do some thinking on the writing part; it would most likely involve sticking to simple sentences. *That's terrible Japanese for "I cannot write well". I think. I hope.
estimator-20

I don't think it's strange. Firstly, it does have distinguishing qualities; the question is whether they are relevant or not. So, you choose an analogy which shares the qualities you currently think are relevant; then you do some analysis of your analogy, and come to certain conclusions. But it is easy to overlook a step in the analysis which happens to depend substantially on a property that you previously thought was irrelevant in the original model, and you can fail to see it, because it is absent in the analogy. So I think that double-checking results... (read more)

2ike
I'm not saying not to double check them. My problem was that you seemed to have come to a conclusion that requires there to be a relevant difference, but didn't identify any. Even repeating the thought experiment with a quantum computer doesn't seem to change my intuition.

That's a difficult question to answer, so I'll give you the first thing I can think of. It's still me, just a lower percentage of me. I'm not that confident that it can be put to a linear scale, though.

That is one of the reasons why I think binary-consciousness models are likely to be wrong.

There are many differences between brains and computers; they have different structure, different purpose, different properties. I'm pretty confident (>90%) that my computer isn't conscious now, and the consciousness phenomenon may have specific qualities which a... (read more)

2ike
Do you have any of these qualities in mind? It seems strange to reject something because "maybe" it has a quality that distinguishes it from another case. Can you point to any of these details that's relevant?

Nice, but beware reasoning after you've written the bottom line.

As for the actual content, I basically fail to see its area of applicability. For sufficiently complex skills -- say, math, languages, or football -- the decision-trees & how-to-guides approach will likely fail as too shallow; for isolated skills like changing a tire, complex learning approaches are overkill -- just google it and follow the instructions. Can you elaborate on the languages example further? Because, you know, learning a bunch of phrases from a phrasebook to be able to say a few words in ... (read more)

2Nanashi
Also, when you say "intermediate level language knowledge", what exactly do you mean? One of the key steps is defining exactly what you want to accomplish and why. I don't want to create a whole write-up, only to realize that you and I have two different definitions of "intermediate level language knowledge". So if you'd tell me the "what" and the "why", I'll do the rest.
2Nanashi
Basketball is an example. I'm about to head home so I'll do the ultra-abbreviated TL;DR version:

1. Goals: Score points, prevent opponent from scoring points.
2. Archetypes: Offense (2-point), Offense (3-point), Defense.
3. Process How-To: Googled "how to layup", "how to shoot a 3-pointer", and "how to steal a ball".
3a. Process Failure Points: Missing a shot, getting the ball stolen, missing a pass.
3b. Process Difficulties: Anything involving ball handling or dribbling. Defense.
4. Exception Protocol: On offense: pass the ball to a better player than myself, or set a pick. On defense: play very close to my opponent.
5a. Avoid anything involving dribbling but not scoring.
5b. Prepare and practice two-point shots.
5c. Focus on getting open for a 3-point shot. Practice consistently shooting from the 3-point line.
5. Get better by playing.

I would say basketball is fairly complex. One thing I didn't mention in the original post (mainly because it starts to get into the "how do individual people learn" question): for me, I don't get good at a competitive skill by competing against people who also suck. Getting good enough to be able to play with people who are actually good made it easier for me to learn the advanced part of the game faster. Also, this post has a list of (at least what I think to be) fairly non-trivial skills that I have trained using this method.

OK, suppose I come to you while you're sleeping and add/remove a single neuron. Will you wake up, in your model? Yes, because while you're naturally sleeping, many more neurons change. Now imagine that I alter your entire brain. Now the answer seems to be no. Therefore, there must be some minimal change to your brain which ensures that a different person will wake up (i.e. with different consciousness/qualia). This seems strange.

You don't assume that the person who wakes up always has a different consciousness from the person who fell asleep, do you?

It would be the same computer, but a different working session. Anyway, I doubt such analogies are precise and allow for reliable reasoning.

2ike
Alter how? Do I still have memories of this argument? Do I share any memories with my past self? If I share all memories, then probably it's still me. If all have gone, then most likely not. (Identifying self with memories has its own problems, but let's gloss over them for now.) So I'm going to interpret your "remove a neuron" as "remove a memory", and then your question becomes "how many memories can I lose and still be me?" That's a difficult question to answer, so I'll give you the first thing I can think of. It's still me, just a lower percentage of me. I'm not that confident that it can be put to a linear scale, though.

This is a bit like the Sorites paradox. The answer is clearly to switch to a non-binary notion of same-consciousness. That doesn't mean I can't point to an exact clone and say it's me.

Not sure what you mean. Some things change, so it won't be exactly the same. It's still close enough that I'd consider it "me". Such analogies can help if they force you to explain the difference between computer and brain in this regard. You seem to have the same model for computers that I have for brains; why isn't it illogical there?

p("your model") < p("my model") < 50% -- that's how I see things :)

Here is another objection to your consciousness model. You say that you are unconscious while sleeping; so, at the beginning of sleep your consciousness flow disappears, and then appears again when you wake up. But your brain state is different before and after sleep. How does your consciousness flow "find" your brain after sleep? What if I, standing on another planet many light years away from Earth, build atom-by-atom a brain whose state is closer to yo... (read more)

3ike
I don't think it's meaningful to talk about a "flow" here.

Then that would contain my consciousness, as well as myself after awaking. You could try to quantify how similar and dissimilar those states might be, but they're still close enough to call it the same person.

What would you say to your thought experiment if I replace "brain" with "computer"? I turn off my OS, then start it again. The state of RAM is not the same as it was right before shutdown, so who is to say it's the same computer? If you make hardware arguments, I'll tell you the HD was cloned after power-off, then transferred to another computer with identical hardware. If that preserves the state of "my OS", then the same should be true for "brains", assuming physicalism.

That's a typo; I meant that my model doesn't imply continuous time. By the way, does it make sense to call it "my model" if my estimate of the probability of it being true is < 50%?

So, why do I think that consciousness requires continuity?

I guess you meant "doesn't require"?

I'd say that the continuity requirement is the main cause of the divergence in our plausibility rankings, at least.

What is your probability estimate of your model being (mostly) true?

2ike
Fixed. I guess we're even now :) You're criticising other theories based on something you put less than 50% credence in? That's how this all started. More than 90%. If I had a consistent alternative that didn't require anything supernatural, then that would go down.

I've started commenting here recently, but I'm a long-time lurker (>1 year). Also, I was speaking about self-help articles in general, not conditional on whether they are posted on LW -- which makes sense, because pretty much anyone can post on LW.

Now I've found a somewhat less extreme example of what I think is an OK post on self-help, although it doesn't have scientific references, because a) the author told us what actual results he achieved and, more importantly, b) the author explained why he thinks the advice works in the first place.

Personally, I... (read more)

3Nanashi
That's totally fine; like I said, your post made sense and was consistent with what I've seen. I still don't really think that stating my qualifications would do much. In this context, it still just seems too much like bragging.

"I helped build a multi-million dollar company, I compete in barbecue competitions and consistently place in the top 10% of the field and was sponsored by a major barbecue website, was ranked in the top 100 players in the world for a popular collectible card game, learned how to code with no formal education (and used that knowledge to write a somewhat well-received calibration test, and also a bunch of boring business platforms), wrote an article about a baseball statistic I co-developed and was published in a publication that's important for people who care about baseball stats, learned how to be a carpenter, at one point was a licensed pharmacy technician, blah blah blah."

Even though I'm sure there's a less crass way to phrase it, to me it still sounds exceedingly arrogant. I might be overreacting, though. You tell me: if I prefaced my post with that, would you be more or less inclined to take me seriously?

I do like the idea of explaining why I think the advice works in the first place. I will start writing something up about that and append it to the original post.
estimator-20

I find a model plausible if it isn't contradicted by evidence and matches my intuitions.

My model doesn't imply discrete time; I don't think I can precisely explain why, because I basically don't know how consciousness works at that level; intuitively, just replace t + dt with t + 1. Needless to say, I'm uncertain of this, too.

Honestly, my best guess is that all these models are wrong.

Now, what arguments cause you to find your model plausible?

3ike
I think your model implies the opposite; did you misunderstand me?

(First of all, you didn't mention if you agree with my assessment of the root cause of our disagreement. I'll assume you do, and reply based on that.)

So, why do I think that consciousness doesn't require continuity? Well, partly because I think sleep disturbs continuity, yet I still feel like I'm mostly the same person as yesterday in important ways. I find it hard to accept that someone could act exactly like me and not be conscious, for reasons mostly similar to those in the zombie sequence. I identify consciousness with physical brain states, which makes it really hard to consider a clone somehow less, if it would have the exact same brain state as me. (For clones, that may not be practical, but for MWI-clones, it is.)
estimator-20

OK: either I wake up in a room with "no" in the envelope, or I die (deterministically), depending on which envelope you have put in my room.

What exactly happens in the process of cloning certainly depends on the particular cloning technology; the real one is the one which shares a continuous line of conscious experience with me. The (obvious) way for an outsider to detect which one was real is to look at where it came from -- if it was built as a clone, then, well, it is a clone.

Note that I'm not saying that it's the true model, just that I currently find it more plausible; none of t... (read more)

3ike
I hope you realize that you're just moving the problem into determining which one is "your" room, considering neither room had any of you thinking in it until after one was killed. The root of our disagreement then seems to be this "continuous" insistence. In particular, you and I would disagree on whether consciousness is preserved with teleportation or stasis. I could try to break that intuition by appealing to discrete time; does your model imply that time is continuous? It would seem unattractive for a model to postulate something like that. What arguments/intuitions are causing you to find your model plausible?

So, taking a look at what you actually propose to do, this reduces to a) learn some phrases from a tourist phrasebook, and b) learn the rest of the language, while c) avoiding high-stakes situations where you need language knowledge. Reminds me of this.

0satt
That may be a bit more snarky than is helpful. Your reduction loses useful information; Nanashi's longer description of the process includes useful, specific procedural details that could otherwise trip people up.
0Nanashi
Yup, pretty much. To quote myself (Incidentally, the link you posted does not work, it's giving me a 404).
estimator100

Articles on such topics are notorious for their low average quality. Reformulating in Bayesian terms: the prior probability of your statements being true is low, so you should provide some proofs or evidence -- or why should I (or anyone) believe you? Have you actually checked if it works? Have you actually checked if it works for somebody else?

I don't think that personal achievements are bullet-proof argumentation for such advice. Still, when I read something like this, I'm pretty sure that it contains valuable information, although it is probably a... (read more)

4Nanashi
That's interesting, I wasn't aware of that reputation. That's good to know and certainly justifies your skepticism.

All that said, I think one can still evaluate your point (and in my case, my Less Wrong post) based on its internal logic and how consistent it is with one's own observations, without needing research to back it up. It would be easy enough to dismiss your own post for the very reasons you cited. Consider the following: "In general, people new to a community are notoriously bad at gauging the pulse of said community. To reformulate in Bayesian terms, based on the length of time you've been posting here, the prior probability of your statement being true is low, so shouldn't you provide some proofs or evidence -- or why should I (or anyone) believe you?"

But to me, your logic checks out, and is fairly consistent with my own observations (that most self-help publications tend to be garbage), so that shifts the probabilities significantly in your favor. I'm hoping that people will evaluate my own post by similar criteria, rather than immediately dismissing it.
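(A toy version of the Bayesian shift being described -- all numbers invented purely for illustration.)

```python
# Toy Bayes update for "this advice is sound", starting from a low prior
prior = 0.10                # most self-help articles are low quality
p_match_if_sound = 0.80     # sound advice usually matches one's own observations
p_match_if_unsound = 0.20   # unsound advice sometimes does too

posterior = (p_match_if_sound * prior) / (
    p_match_if_sound * prior + p_match_if_unsound * (1 - prior)
)
print(round(posterior, 2))  # 0.31: shifted up significantly, but still uncertain
```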
estimator-20

Do you think you won't awaken in a room with "no" in the envelope?

I think that I either wake up in a room with "no" in the envelope, or die, in which case my clone continues to live.

Yes, but I also think conscious experience is halted during regular sleep. Also, should multiple copies survive, his conscious experience will continue in multiple copies. His subjective probability of finding himself as any particular copy depends on the relative weightings (i.e. self-locating uncertainty).

I find this model implausible. Is there any evidence I can update on?

3ike
But this world I described is (or can be) completely deterministic; how can you be uncertain of what will happen? I understand how I can be subjectively uncertain due to self-locating uncertainty, but there should be no possible objective uncertainty in a deterministic world. The only out I see is if you think consciousness requires non-deterministic physical processes.

I'm not sure I understand your reasoning here, so I'm not sure. Have you read the Ebborian posts in the quantum sequence? What exactly do you think would happen when someone is cloned? Why would one copy be "real" and the other not? Would there be any way to detect which was real for outsiders?
estimator110

I think that all self-help / "learning to learn" / etc. articles should contain a short summary giving us some reasons to actually believe anything written below -- references to relevant research, the author's real-life achievements, or something. Generally, one shouldn't rely on personal anecdotes; but for self-help, even having a single data point is often too high a standard.

In your article, I couldn't find a single bit of evidence in support of your claims.

2Nanashi
Sure, I could, but would that make you any more likely to accept it? Generally I've found that the more someone expounds on their own credentials, the less credible (and likable) they sound. If my own personal achievements would genuinely make a difference to you personally, then I'd be glad to tell you them. If not, then I don't quite see the point.
estimator-20

I think the problem with consciousness/qualia discussions is that we don't have a good set of terms to describe such phenomena, while also being unable to reduce them to other terms.

No, one copy will see 1, another 2, etc. Something like that will fork my consciousness, which has uncertain effects, which is why I proposed being asleep throughout.

I mean, one of the copies would be you (and share your qualia), while others are forks of you. That's because I think that a) your consciousness is preserved by the branching process and b) you don't experience livi... (read more)

3ike
In my model, all the copies have qualia. Put another way, clearly there's no way for an outside observer to say about any copy that it doesn't have qualia, so the only possible meaning here would be subjective. However, each copy subjectively thinks itself to have qualia. (If you deny either point, please elaborate.) Given those, I don't see any sense in which anyone can say that the qualia "only" goes to a single fork, with the others being "other" people.

I agree with a, but I think your consciousness is forked by the branching process. I agree with b, assuming you mean "no one person observes multiple branches after a fork". I don't think those two imply that QL requires look-ahead.

What if I rephrased this in one-world terms? I clone you while you're asleep. I put you in two separate rooms. I take two envelopes, one with a "yes" on it, the other with a "no", and put one in each room. Someone else goes into each room, looks at the envelope, then kills you iff it says "yes", and wakes you iff it says "no". Do you think you won't awaken in a room with "no" in the envelope?

As long as we aren't defining consciousness, I can't really disagree that some plausible definition would make this true. I don't.

Yes, but I also think conscious experience is halted during regular sleep. Also, should multiple copies survive, his conscious experience will continue in multiple copies. His subjective probability of finding himself as any particular copy depends on the relative weightings (i.e. self-locating uncertainty). There is no "truth" as to which copy they'll end up in.
estimator-10

OK, now imagine that the computer shows you the number n on its screen. What will you see? You say that both copies have your consciousness; will you see a superposition of numbers? I don't see how simultaneously being in different branches makes sense from the qualia viewpoint.

Also, let's remove sleeping from the thought experiment. It is an unnecessary complication; by the way, I don't think that the flow of consciousness is interrupted while sleeping.

And no, I'm currently unable to dissolve the hard problem of consciousness.

3ike
No, one copy will see 1, another 2, etc. Something like that will fork my consciousness, which has uncertain effects, which is why I proposed being asleep throughout. Until my brain has any info about what the data is, my consciousness hasn't forked yet. The fact that the info is "out there" in this world is irrelevant; the opposite data is also out there "in this world". As long as I don't know, and both actually exist (although that requirement arguably is also irrelevant to the anthropic math), I exist in both worlds. In other words, both copies will be "continuations" of me. If one suddenly disappears, then only the other "continues" me.

There's a reason I included it. I'm more confident that the outcome will be good with it than without. In particular, if I'm not sleeping when killed, I expect to experience death. But the fact that you think it's not interrupted when sleeping suggests we're using different definitions. If it's because of dreaming, then specify that the person isn't dreaming. The main point is that I won't feel pain upon dying (or in fact, won't feel anything before dying), so putting me under general anesthesia and ensuring the death happens before I begin to feel anything should be enough, in that case.

I meant just enough that I could understand what you mean when you claim that consciousness must only go to one path.

At least in math, a paper can actually be verified during peer review.

1DanielLC
Easier said than done. Just because you didn't notice an error in a two hundred page proof doesn't mean there isn't one.

My impression is that inside LW they are usually assumed true, while outside LW they are usually assumed false or highly questionable. Again, I'm not saying that these theories are wrong, but the pattern looks suspicious: almost every one of LW's non-mainstream beliefs can be traced back to Eliezer. What a coincidence. One of the possible explanations is the halo effect of the Sequences. Or they are actually underrated outside LW. Or my impressions are distorted.

gwern130

Or my impressions are distorted.

I'm going with distorted.

Take MWI for example; apparently a lot of people are under the impression that LWers must be ~100% MWI fanatics. But the annual surveys report that lukewarm endorsements of MWI as the least bad QM interpretation cover, what, <50% of respondents? And it's not clear to me that LW is even different from mainstream physicists, since the occasional polls of them show MWI keeps becoming more popular. It seems like people overgeneralize from the generally respectful treatment of MWI as a valid altern... (read more)

Can you unpack "optimizing thought processes"? Under some definitions the statement is questionable, under others trivially true.

Also, the articles you've linked to describe techniques that are very popular outside LW -- so if they are overrated, it isn't a LW-specific mistake.

0passive_fist
I can try to elaborate on the criticisms of the pages I linked.

There hasn't been any study of the long-term effects of spaced repetition. There are indications that it may be counter-productive and that it may act as an artificial 'importance inflator' of information, desensitizing the brain's long-term response to new knowledge that is actually important, especially if one is not consciously aware of that.

About the pomodoro technique: it's even less researched than spaced repetition, and there's very little solid evidence that it works. One thing that seems a bit worrying is that it seems like a 'desperate measure' adopted by people experiencing low productivity, indicating some other problem (depression/burnout etc.) that should be dealt with directly. In these cases pomodoros would make things far worse.

It could be said that none of these are criticisms of LW, but just criticisms of these specific techniques that arose outside of LW. However, if one is too eager to adopt and believe in such techniques, it betrays ADS-type thinking as relating to the idea that optimization of thought processes can be done through 'productivity hacks'.
estimator-20

TDT, FAI (esp. CEV), acausal trading, MWI -- regardless of whether they are true or not, the level of criticism is lower than one would expect; either because of the halo effect or ADS.

2Richard_Kennaway
I see these things being discussed here from time to time. I don't see any general booming of them, still less any increasing trend. Eliezer, of course, has boomed MWI quite strongly; but he is no longer here.
estimator-20

I don't have a model which I believe with certainty even provided MWI is true.

I think that, given MWI, your consciousness is in any world in which you exist, so that if you kill yourself in the other worlds, you only exist in worlds where you didn't kill yourself. I'm not sure what else could happen; obviously you can't exist in the worlds you're dead in.

What happens if you die in a non-MWI world? Pretty much the same as in the case of MWI with a random branch choice. If your random branch happens to be a bad one, you cease to exist, and maybe some of your ... (read more)

3ike
I deny that this is meaningful. If there are two copies of me, both "have my consciousness". I fail to see any sense in which my consciousness must move to only one copy.

I do not claim that. I claim that I exist in both branches, up until one of them no longer contains my consciousness, because I'm dead, and then I only exist in one branch. (In fact, I can consider my sleeping self unconscious, in which case no branches contained my consciousness until I woke up.) Then many copies of my consciousness will exist, some slowly dying each day. I don't have any look-ahead required in my model at all.

Can you dissolve consciousness? What test can be performed to see which branch my consciousness has moved to, that doesn't require me to be awake, nor have knowledge of the random data?

What are you trying to improve on LW, and why? What is the purpose of the improvements? What do you want LW to be like after you apply them?

Personally, I'd want LW to be an effective tool for learning how to apply rationality, discussing rationality and rationality-related topics, and developing new rationality techniques. Every instrument has its purpose; if you want to study, say, math, isn't it more effective to go to some math communities and seek help and assistance there? If you want to chat like on Facebook, why not go to Facebook? If you have a bril... (read more)

0Adam Zerner
Good points, thanks for bringing them up. On second thought, I think I may have overestimated the value of things like study partners and project collaboration. Personally, I just really value being able to do these things with other LW users. Inferential distance is one big reason. I sense that I'm not alone here, but I'm not sure.

In addition to these sorts of tangential things, I also would like to see the quality of posts and conversation on LW improve, and I think some of the other points I brought up would help address that.

Hmm, I don't want to nitpick, but I'm struggling to answer this question because I don't know how to interpret words like "bottleneck" and "satisfied". I think that a more sophisticated UI could really transform how people think and communicate. As for how things are right now, I'd qualitatively describe it as "fine". Maybe "mildly satisfying".
estimator-20

I don't have a model which I believe with certainty, and I think it is a mistake to have one, unless you know sufficiently more than modern physics does.

Why do you think that your consciousness always moves to the branch where you live, rather than at random? Quantum lotteries, quantum immortality, and the like require not just MWI, but MWI with a bunch of additional assumptions. And if some flavor of QM interpretation violates causality, that is more an argument against such an interpretation than against causality.

The thing I don't like about such a way of winnin... (read more)

3ike
Note that I said provided MWI is true. I think that, given MWI, your consciousness is in any world in which you exist, so that if you kill yourself in the other worlds, you only exist in worlds where you didn't kill yourself. I'm not sure what else could happen; obviously you can't exist in the worlds you're dead in.

I don't see why; MWI doesn't violate locality.

You have a point; my scenario is different from that, but I guess it isn't obvious. So let me restate my quantum suicide lottery in more detail. The general case I imagine is as follows: I go to sleep at time t. My computer checks some quantum data, and compares it to n. If it doesn't equal n, it kills me. Say I die at time t+dt in that case. If I don't die, it wakes me. So at time t, the data is already determined from the computer's perspective, but not from mine. At t+dt, the data is determined from my perspective, as I've awoken. In the time between t and t+dt, it's meaningless to ask what "branch" I'm in; there's no test I can do to determine that in theory, as I only awaken if I'm in the data=n branch. It's meaningful to other people, but not to me. I don't see anywhere that requires non-local laws in this scenario.
estimator-20

It's just my impression; I don't claim that it is precise.

As for the recent post by Loosemore, I think that it is sane and well-written, and clearly required a substantial amount of analysis and thinking to write. I consider it a central example of a high-quality non-LW-mainstream post.

Having said that, I mostly disagree with its conclusions. All the reasoning there is based on the assumption that the AGI will be logic-based (CLAI, in the post's terminology), which I find unlikely. I'm 95% certain that if an AGI is going to be built anytime soon, it... (read more)
