All of YVLIAZ's Comments + Replies

YVLIAZ00

I just want to point out some nuances.

1) The divide between your so-called "old CS" and "new CS" is more of a divide (or perhaps a continuum) between engineers and theorists. The former is concerned with on-the-ground systems, where quadratic time algorithms are costly and statistics is the better weapon at dealing with real world complexities. The latter is concerned with abstracted models where polynomial time is good enough and logical deduction is the only tool. These models will probably never be applied literally by engineers, bu... (read more)

YVLIAZ00

There is a difference between "having an idea" and "solid theoretical foundations". Chemists before quantum mechanics had lots of ideas. But they didn't have a solid theoretical foundation.

That's a bad example. You are essentially asking researchers to predict what they will discover 50 years down the road. A more appropriate example is a person thinking he has medical expertise after reading bodybuilding and nutrition blogs on the internet, vs a person who has gone through medical school and is an MD.

0Squark
I'm not asking researchers to predict what they will discover. There are different mindsets of research. One mindset is looking for heuristics that maximize short term progress on problems of direct practical relevance. Another mindset is looking for a rigorously defined overarching theory. MIRI is using the latter mindset while most other AI researchers are much closer to the former mindset.
YVLIAZ00

I think you are overhyping the PAC model. It surely is an important foundation for probabilistic guarantees in machine learning, but there are some serious limitations when you want to use it to constrain something like an AGI:

  1. It only deals with supervised learning

  2. Simple things like finite automata are not learnable, but in practice it seems like humans pick them up fairly easily.

  3. It doesn't deal with temporal aspects of learning.

However, there are some modifications of the PAC model that can ameliorate these problems, like learning with membership ... (read more)
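For concreteness, the kind of probabilistic guarantee the PAC model does give is easy to state: for a finite hypothesis class H, roughly (1/ε)(ln|H| + ln(1/δ)) labelled examples suffice for a consistent learner to reach error below ε with probability 1−δ. A minimal sketch of that bound (the standard finite-class bound, used purely for illustration):

```python
import math

# Finite-hypothesis-class PAC bound: with
#   m >= (1/eps) * (ln|H| + ln(1/delta))
# i.i.d. examples, any learner that outputs a hypothesis
# consistent with the sample has true error < eps with
# probability at least 1 - delta.
def pac_sample_size(eps: float, delta: float, h_size: int) -> int:
    return math.ceil((1 / eps) * (math.log(h_size) + math.log(1 / delta)))

# Toy numbers: 1000 hypotheses, 10% error tolerance, 95% confidence.
m = pac_sample_size(eps=0.1, delta=0.05, h_size=1000)
print(m)  # 100
```

Note the bound says nothing about temporal structure or queries, which is exactly the limitation points 1–3 above are about.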

0[anonymous]
I would definitely call it an open research problem to provide PAC-style bounds for more complicated hypothesis spaces and learning settings. But that doesn't mean it's impossible or un-doable, just that it's an open research problem. I want a limitative theorem proved before I go calling things impossible.
YVLIAZ70

I would definitely recommend learning the basics of algorithms, feasibility (P vs NP), or even computability (halting problem, Gödel's incompleteness, etc.). They will change your worldview significantly.

CLRS is a good entry point. After that, perhaps Sipser for some more depth.
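The feasibility point can be made concrete with a toy example (my own illustration, not from either book): checking one candidate solution to a SAT instance is fast, but the naive search enumerates all 2^n assignments, which is what makes NP-style brute force infeasible as n grows.

```python
from itertools import product

# Brute-force SAT: verifying one assignment is cheap (polynomial),
# but the loop below visits up to 2^n assignments (exponential).
def brute_force_sat(clauses, n):
    """clauses: list of clauses; each clause is a list of literals
    (+i means variable i is true, -i means variable i is false)."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(brute_force_sat(clauses, 3))  # (False, True, False)
```

CLRS covers the algorithms side of this; Sipser covers why no known trick avoids the exponential blow-up in general.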

0redlizard
Seconded. P versus NP is the most important piece of the basic math of computer science, and a basic notion of algorithms is a bonus. The related broader theory which nonetheless still counts as basic math is algorithmic complexity and the notion of computability.
YVLIAZ00

Yes, so the exact definitions of "have-to" and "want-to" already present some difficulties in pinpointing what exactly the theory says.

In my personal experience, it's not so much "fear" as fatigue and frustration. I also don't feel that my desire to read diminishes; it stays intense, but my brain just can't keep absorbing information, and I find myself rereading the same passages because I can't wrap my head around them.

YVLIAZ80

I can see this theory working in several scenarios, despite (or perhaps rather because of) the relative fuzziness of its description (which is of course the norm in psychological theories so far). However, I have personal experiences that, at least at face value, don't seem explicable by this theory:

During my breaks I would read textbooks, mostly mathematics and logic, but also branching into biology/neuroscience, etc. I would begin with pleasure, but if I read the same book for too long (several days) my reading speed slows down and I start f... (read more)

It feels like this model would be worth combining with Kurzban et al's model, which posits that as we continue working on some task for an extended time, our brain's estimate of the marginal benefit of continuing to work on this task gradually declines, making it more likely that we will switch to doing something else.

If we furthermore combine things with this model, which posits that the amount of interest that one has for some domain is relative to one's sensitivity to feedback in that domain, then that might be a step towards figuring out why exactly s... (read more)

2savageorange
I do ('have-to' and 'want-to' are dynamically redefined things for a person, not statically defined things). I regard excessive repetition as dangerous*, even on a subconscious level. So as I get into greater numbers of repetitions, I feel greater and greater unease, and it's an increasing struggle to keep my focus in the face of my fear. So my 'want-to' either reduces or is muted by fear. If you do not have this type of experience, obviously this does not apply. *Burnout and overhabituation/compulsive behaviours being two notable possibilities.
YVLIAZ00

I bought these with a 4-socket adapter. However, I think my lamp can't power them all. Does anyone know a higher-output lamp?

Actually I'm not even sure if that is how lights work. If someone can explain how I can increase the power that goes to the light bulbs, it'd be greatly appreciated.
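For what it's worth, the arithmetic part is simple (the wattages below are hypothetical, stand-ins for whatever is printed on your fixture and bulbs): the bulbs' draws add up, and the total must stay under the fixture's rating.

```python
# Hypothetical numbers for illustration only: a fixture rated
# 60 W cannot safely drive four 26 W bulbs, since draws add.
fixture_rating_w = 60   # assumed rating printed near the socket
bulb_wattage_w = 26     # assumed per-bulb draw
num_bulbs = 4

total_draw_w = bulb_wattage_w * num_bulbs
print(total_draw_w, total_draw_w <= fixture_rating_w)  # 104 False
```

The electrical caveat (which the arithmetic doesn't capture) is that the rating limits heat, not just current, so staying under it matters.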

0obfuscate
Have you tried using normal bulbs of lower wattage in the 4 socket adapter? (Honestly, the lower wattage shouldn't make a difference, but just in case it does...)
YVLIAZ10

I'm going out on a limb on this one, but since the whole universe includes separate branching “worlds”, and over time this means we have more worlds now than 1 second ago, and since the worlds can interact with each other, how does this not violate conservation of mass and energy?

The "number" of worlds increases, but each world is weighted by a complex amplitude, such that the squared magnitudes of all the amplitudes sum to 1. This effectively preserves mass and energy across all worlds, inside the universal wave function.
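The normalization claim can be sketched numerically with a toy two-branch state (my own minimal illustration, no QM library involved): the squared magnitudes of the branch weights sum to 1, and a unitary step that redistributes amplitude between branches leaves that sum unchanged.

```python
import math

# Toy two-"world" state: each branch carries a complex amplitude,
# and the squared magnitudes sum to 1.
a = complex(1 / math.sqrt(2), 0)
b = complex(0, 1 / math.sqrt(2))
norm = abs(a) ** 2 + abs(b) ** 2
print(round(norm, 10))  # 1.0

# A unitary step (here a Hadamard-like rotation) redistributes
# amplitude between the branches but preserves the total norm.
h = 1 / math.sqrt(2)
a2, b2 = h * (a + b), h * (a - b)
print(round(abs(a2) ** 2 + abs(b2) ** 2, 10))  # 1.0
```

This is only the normalization bookkeeping, of course; whether that answers the conservation question is what the replies below dispute.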

0Shmi
Even if this were true, conservation of energy across the worlds is not a good argument for or against MWI. There is no reason it should be conserved over non-interacting entities. Also note that energy in our expanding Universe is not conserved (or even well-defined) globally.
YVLIAZ30

In contrast with the title, you did not show that the MWI is falsifiable or testable.

I agree that he didn't show that it's testable, but rather the possibility of it (and a formalization of it).

You just showed that MWI is "better" according to your "goodness" index, but that index is not so good

There's a problem with choosing the language for Solomonoff/MML, so the index's goodness can be debated. However, I think in general the index is sound.
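The language-choice problem can be illustrated crudely (my own toy, using off-the-shelf compressors as stand-ins for description languages): two "languages" assign different code lengths to the same data, so any complexity-based index shifts with the choice of language; the invariance theorem only bounds the shift by a constant, it doesn't remove it.

```python
import bz2
import zlib

# Two "description languages" (stand-ins: two real compressors)
# assign different code lengths to the same inputs, so a
# description-length index depends on which language you pick.
regular = b"ab" * 500            # highly patterned input
mixed = bytes(range(256)) * 4    # less patterned input

sizes = {name: (len(comp(regular)), len(comp(mixed)))
         for name, comp in [("zlib", zlib.compress), ("bz2", bz2.compress)]}
print(sizes)
```

Both compressors agree on the ranking here, which is the sense in which the index can still be "in general sound" even though the absolute numbers disagree.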

You calculate the probability of a theory and use this as an index of the "truthness"

... (read more)
1V_V
When I hear about Solomonoff Induction, I reach for my gun :)

The point is that you can't use Solomonoff Induction or MML to discriminate between interpretations of quantum mechanics: these are formal frameworks for inductive inference, but they are underspecified and, in the case of Solomonoff Induction, uncomputable. Yudkowsky and other people here seem to use the terms informally, a usage I object to: it's just a fancy way of saying Occam's razor, and it's an attempt to make their arguments more compelling than they actually are by dressing them in pseudomathematics.

That assumes that Solomonoff Induction is the ideal way of performing inductive reasoning, which is debatable. But even assuming that, and ignoring the fact that Solomonoff Induction is underspecified, there is still a fundamental problem: the hypotheses considered by Solomonoff Induction are probability distributions over computer programs that generate observations. How do you map them to interpretations of quantum mechanics? What program corresponds to Everett's interpretation? What programs correspond to Copenhagen, objective collapse, hidden variables, etc.? Unless you can answer these questions, any reference to Solomonoff Induction in a discussion about interpretations of quantum mechanics is a red herring.

Actually, Copenhagen doesn't commit to collapse being objective. People here seem to conflate Copenhagen with objective collapse, which is a popular misconception. Objective collapse interpretations generally predict deviations from standard quantum mechanics in some extreme cases, hence they are in principle testable.
3Mitchell_Porter
Because it sets people up to think that QM can be understood in terms of wavefunctions that exist and contain parallel realities; yet when the time comes to calculate anything, you have to go back to Copenhagen and employ the Born rule. Also, real physics is about operator algebras of observables. Again, this is something you don't get from pure Schrodinger dynamics. QM should be taught in the Copenhagen framework, and then there should be some review of proposed ontologies and their problems.