All of kgalias's Comments + Replies

When was the last data migration from LW 1.0? I'm getting an "Invalid email" message, even though I have a linked email here.

3Vaniver
Late May, if I recall correctly. We'll be able to merge accounts if you made it more recently or there was some trouble with the import.

For me it just returns "invalid email", though I can see my email in http://lesswrong.com/prefs/update/.

Regarding your last point: is a hellish world preferable to an empty one?

0chaosmage
Yes, because it has more potential for improvement. The Earth of a million years ago, where every single animal was fighting for its life in an existence of pain and hunger, was more hellish than the present one, where at least a percent or so are comparatively secure. So that's an existence proof of hellishness going away. Emptiness doesn't go away. Empty worlds evidently tend to stay empty. We now see enough of them well enough to know that.

Does anyone know if and where I can find "IB Mathematics Standard Level Course Book: Oxford IB Diploma Programme" (I need this one specifically)?

https://global.oup.com/education/product/9780198390114/?region=uk

kgalias30

Thanks! This will be helpful.

kgalias20

I don't have time to evaluate which view is less wrong.

Still, I was somewhat surprised when I saw your first comment.

1skeptical_lurker
Upvoted for not wasting time!
kgalias40

Is this what you have in mind?

Sugar does not cause hyperactivity in children.[230][231] Double-blind trials have shown no difference in behavior between children given sugar-full or sugar-free diets, even in studies specifically looking at children with attention-deficit/hyperactivity disorder or those considered sensitive to sugar.[232]

Wikipedia

5skeptical_lurker
No, I have this in mind: http://www.ncbi.nlm.nih.gov/pubmed/17224202
kgalias10

Sugar alone makes it more difficult to concentrate for many people, as well as having many other deleterious effects.

What do you mean?

1skeptical_lurker
I mean, if you are oscillating between sugar highs and crashes, it is difficult to concentrate, plus it causes diabetes etc.
kgalias00

Sorry for the pause, internet problems at my place.

Anyways, it seems you're right. Technically, it might be more plausible for AI to be coded faster (higher variance), even though I think it'll take more time than emulation (on average).

kgalias10

I agree.

Why does this make it more plausible that a person can sit down and invent a human-level artificial intelligence than that they can sit down and invent the technical means to produce brain emulations?

1[anonymous]
We have the technical means to produce brain emulations. It requires just very straightforward advances in imaging and larger supercomputers. There are various smaller-scale brain emulation projects that have already proved the concept. It's just that doing that at a larger scale and finer resolution requires a lot of person-years just to get it done.

EDIT: In Rumsfeld-speak, whole-brain emulation is a series of known knowns: lots of work that we know needs to be done, and someone just has to do it. Whereas AGI involves known unknowns: we don't know precisely what has to be done, so we can't quantify exactly how long it will take. We could guess, but it remains possible that clever insight might find a better, faster, cheaper path.
kgalias30

Why do we assume that all that is needed for AI is a clever insight, not the insight-equivalent of a long engineering time and commitment of resources?

1[anonymous]
Because the scope of the problems involved, e.g. search space over programs, can be calculated and compared with other similarly structured but solved problems (e.g. narrow AI). And in a very abstract theoretical sense, today's desktop computers are probably sufficient for running a fully optimized human-level AGI.

This is a sensible and consistent result -- it should not be surprising that it takes many orders of magnitude more computational power to emulate a computing substrate running a general intelligence (the brain simulated by a supercomputer) than to run a natively coded AGI. Designing the program which implements the native, non-emulative AGI is basically a "clever insight" problem, or perhaps more accurately a large series of clever insights.
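[Editor's note: a hedged back-of-envelope sketch of the gap being described. All three figures below are illustrative assumptions, roughly in line with published whole-brain-emulation estimates; none of them come from this thread.]

```python
import math

# Assumed, order-of-magnitude figures only:
desktop_flops = 1e13         # a high-end 2014-era desktop, ~10 TFLOPS
supercomputer_flops = 3e16   # roughly Tianhe-2-class peak performance
wbe_flops_estimate = 1e18    # a spiking-network-level emulation estimate

# How far short of a whole-brain emulation does each platform fall?
gap_vs_desktop = math.log10(wbe_flops_estimate / desktop_flops)
gap_vs_super = math.log10(wbe_flops_estimate / supercomputer_flops)

print(f"Emulation vs desktop: ~{gap_vs_desktop:.0f} orders of magnitude short")
print(f"Emulation vs supercomputer: ~{gap_vs_super:.1f} orders of magnitude short")
```

Under these assumptions the emulation route needs hardware several orders of magnitude beyond a desktop, which is the asymmetry the comment is pointing at: the native-AGI route is bottlenecked on insight, the emulation route on engineering scale.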
kgalias20

How is theoretical progress different from engineering progress?

Is the following an example of valid inference?

We haven't solved many related (and seemingly easier) (sub)problems, so the Riemann Hypothesis is unlikely to be proven in the next couple of years.

In principle, it is also conceivable (but not probable) that someone will sit down and make a brain emulation machine.

1[anonymous]
Making a brain emulation machine requires (1) the ability to image a brain at sufficient resolution, and (2) computing power in excess of the largest supercomputers available today. Both of these tasks require a long engineering lead time and commitment of resources, and are not things which we expect to be solved by some clever insight. Clever insight alone won't ever enable you to construct record-setting supercomputers out of leftover hobbyist computer parts, toothpicks, and superglue.
kgalias40

Hello! My name is Christopher Galias and I'm currently studying mathematics in Warsaw.

I figured that using a reading group would be helpful in combating procrastination. Thank you for doing this.

kgalias20

This is the part of this section I find least convincing.

2[anonymous]
Can you elaborate?
kgalias10

To be clear, you are saying that a thing will seem frivolous if it does have a relevant franchise, but hasn't happened in real life?

Yes, that was my (tentative) claim.

We would need to know whether the examples were seen as frivolous after they came into being, but before the technology started being used.

kgalias40

Can't we use a hierarchy of ordinal numbers and a different ordinal sum (e.g. maybe something of Conway's) in our utility calculations?

That is, lying would be infinitely bad, but lying ten times would be infinitely worse.
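[Editor's note: a minimal sketch of the distinction at stake, assuming for illustration that each lie is assigned disutility ω.]

```latex
% Ordinary ordinal addition is order-dependent and absorbs finite
% terms on the left:
\[ 1 + \omega = \omega, \qquad \omega + 1 > \omega \]
% so whether a finite harm "counts" depends on the order of summation.
% The natural (Hessenberg) sum $\oplus$, which Conway's surreal-number
% addition extends, is commutative and strictly monotone in both
% arguments, so nothing is absorbed:
\[ \underbrace{\omega \oplus \omega \oplus \cdots \oplus \omega}_{10\ \text{times}} = \omega \cdot 10 > \omega \]
% One lie then costs $\omega$ and ten lies cost $\omega \cdot 10$:
% infinitely bad, and strictly (indeed infinitely) worse.
```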

kgalias40

OK, but war happens in real life. For most people, the only time they hear of AI is in Terminator-like movies.

I'd rather compare it to some other technological topic, one which doesn't have a relevant franchise in popular culture.

4KatjaGrace
To be clear, you are saying that a thing will seem frivolous if it does have a relevant franchise, but hasn't happened in real life? Some other technological topics that hadn't happened in real life when people became concerned about them:

* Nuclear weapons had The World Set Free, though I'm not sure how well known it was (may have been seen as frivolous by most at first - I'm not sure, but by the time there were serious projects to build them I think not)
* Extreme effects from climate change, e.g. massive sea level rise, freezing of Northern Europe: no particular popular culture franchise (not very frivolous)
* Recombinant DNA technology: the public's concern was somewhat motivated by The Andromeda Strain (not frivolous I think)

Evidence seems mixed.
kgalias20

As a possible failure of rationality (curiosity?) on my part, this week's topic doesn't really seem that interesting.

kgalias10

What topic are you comparing it with?

When you specify that, I think the relevant question is: does the topic have an equivalent of a Terminator franchise?

1KatjaGrace
War is taken fairly seriously in reporting, though there are a wide variety of war-related movies in different styles.
kgalias10

No need to apologize - thank you for your summary and questions.

Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.

No disagreement here.

kgalias20

I'm just trying to make sure I understand - I remember being confused about the Flynn effect and about what Katja asked above.

How does the Flynn effect affect our belief in the hypothesis of accumulation?

3gallabytes
It just means that the intelligence gap was smaller, potentially much, much smaller, when humans first started developing a serious edge relative to apes. It's not evidence for accumulation per se, but it's evidence against us just being so much smarter from the get-go, and after renormalizing it functions very much like evidence for accumulation.
kgalias20

It is possible, then, that exposure to complex visual media has produced genuine increases in a significant form of intelligence. This hypothetical form of intelligence might be called "visual analysis." Tests such as Raven's may show the largest Flynn gains because they measure visual analysis rather directly; tests of learned content may show the smallest gains because they do not measure visual analysis at all.

Do you think this is a sensible view?

1gallabytes
Eh, not especially. IIRC, scores have also had to be renormalized on Stanford-Binet and Wechsler tests over the years. That said, I'd bet it has some effect, but I'd be much more willing to bet on less malnutrition, less beating / early head injury, and better public health allowing better development during childhood and adolescence. That said, I'm very interested in any data that points to other causes behind the Flynn effect, so if you have any to post, don't hesitate.
kgalias40

The terms that I singled out while reading were: Backpropagation, Bayesian network, Maximum likelihood, Reinforcement learning.

1Paul Crowley
That's a tricky problem! If we assume people are doing this in their spare time, then a weekend is the best time to do it: say noon Pacific time, which is 9pm Berlin time. But people might want to be doing something else with their Saturdays or Sundays. If they're doing it with their weekday evenings, then they just don't overlap; the best you can probably do is post at 10am Pacific time on (say) a Monday, and let Europe and UK comment first, then the East Coast, and finally the West Coast. Obviously there will be participants in other timezones, but those four will probably cover most participants.
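[Editor's note: for concreteness, a small sketch of the conversion being described. The date is an arbitrary assumption; `zoneinfo` is in the standard library from Python 3.9.]

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Assumed illustrative date: "noon Pacific" during daylight-saving time.
post_time = datetime(2015, 6, 6, 12, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

# The four zones the comment expects to cover most participants.
for label, tz in [("Pacific", "America/Los_Angeles"),
                  ("Eastern", "America/New_York"),
                  ("UK", "Europe/London"),
                  ("Berlin", "Europe/Berlin")]:
    local = post_time.astimezone(ZoneInfo(tz))
    print(f"{label:8} {local:%H:%M}")
# Pacific 12:00, Eastern 15:00, UK 20:00, Berlin 21:00
```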
kgalias90

I was under the impression (after reading the sections) that the argument hinges a lot less on (economic) growth than what might be gleaned from the summary here.

1KatjaGrace
Apologies; I didn't mean to imply that the economics-related arguments here were central to Bostrom's larger argument (he explicitly says they are not) - merely to lay them out, for what they are worth. Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.
3NxGenSentience
It may have been a judgement call by the writer (Bostrom) and editor: he is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get the attention of most people (policymakers, decision makers, basically "the Suits" who run the world)? Talk about the money. Most of even educated humanity sees the world in one color (can't say green anymore, but the point is made). Try to motivate people about global warming? ("...um....but, but.... well, it might cost JOBS next month, if we try to save all future high-level earthly life from extinction... nope, the price [lost jobs] of saving the planet is obviously too high...") Want to get non-thinkers to even pick up the book and read the first chapter or two... talk about money. If your message is important to get in front of maximum eyeballs, sometimes you have to package it a little bit, just to hook their interest. Then morph the emphasis into what you really want them to hear, for the bulk of the presentation.

Of course, strictly speaking, what I just said was tangent to the original point, which was whether the summary reflected the predominant emphasis in the pages of the book it ostensibly covered. But my point about PR considerations was worth making. Also, Katja or someone did, I think, mention maybe formulating a reading guide for Bostrom's book, in which case any such author of a reading guide might already be thinking about this "hook 'em by beginning with economics" tactic, to make the book itself more likely to be read by a wider audience.
2lukeprog
Agree.
kgalias00

There's a small chance I might be there - if not, see you next time!

kgalias00

I would be interested, but I'd prefer the day before or so.

kgalias10

Somewhat relevant: http://golem.ph.utexas.edu/category/2007/05/linear_algebra_done_right.html

I've also seen this book described as "one of those texts that feels like a piece of category theory even though it’s not actually about categories", which is high praise.

kgalias00

The cost here might be someone implementing a technical solution.

kgalias00

Are minor nuisances never worth solving?

0Gunnar_Zarncke
Not if the cost exceeds the benefits.
kgalias20

I understand. Nevertheless, discussion so far hasn't gotten anywhere. Perhaps downvoting meetup threads would put some pressure on people involved in meetups to resolve the matter.

As of now, I haven't downvoted any meetup-related thread.

rocurley120

I'm the guy who posts the DC meetups. While I'm sympathetic to the problem, I'm not sure what I can do to help, aside from not posting meetups at all (not really an option). Pressuring me won't help you if I can't do anything.

kgalias100

Is it OK for me to downvote meetup threads if I don't want to see them?

0Gunnar_Zarncke
I understand that dissatisfaction with some minor nuisance (and the meetup notices are a minor nuisance, given that you can scroll past them with the flick of a finger) can cause your brain to get into a negative feedback loop where the dissatisfaction gets moved around and increases as long as it is not solved (see also http://lesswrong.com/lw/21b/ugh_fields/). But see through this. It is a minor nuisance. You are above this. Don't let your dissatisfaction fool you. Jedi mind trick: there is no problem with meetups. Scroll on.

I don't know how other meetups go, but my local meetup is based on the fact that members of the group volunteer to lead the meetup (on a week-by-week basis). The person who volunteers puts in some extra amount of their time to ensure that there is a good topic. These people keep the meetups going, and are doing a service for the rationality community.

These people should not be punished with negative karma. If anything, we should be awarding karma for those people who make meetup posts.

Your complaint is about the fact that there is no separate list of meetup and non-meetup posts, and by downvoting meetup posts, you are punishing innocent volunteers.

-7lmm
3James_Miller
A core long-term goal of LessWrong is to build a rationalist community, so a necessary condition for a downvote should be that a post doesn't advance this goal.
philh160

I think not, unless there are only very specific meetup threads that you don't want to see. E.g. ones with no location in the title.

Any individual meetup thread is very valuable for a small number of people, and indifferent-to-mildly-costly to a large number of people. Votes allow you to express a preference direction but not magnitude, which doesn't actually capture preferences in this case.

3[anonymous]
Downvoting, by itself, isn't going to stop anyone from posting meetup threads. That said, there have been discussions/complaints about meetup spam before, so you're not alone. edit: clarify wording
kgalias00

Thanks for the piece of counter-data!

I might look into the book, but the naming convention is a big turnoff.

kgalias00

I already mentioned what Halmos' stance was. What I'm more interested in is how it is possible to work without examples.

1Stabilizer
The point I was trying to make is that it may not be necessary to have "a large stack of examples". It might instead be much more useful to have a couple of "prototypal concrete examples...a root example". Kontsevich seems to have similar thought patterns.
kgalias00

That seems somewhat surprising coming from Gowers.

[This comment is no longer endorsed by its author]
kgalias50

No, of course not, but it still might make sense to wonder why it's so.

3ESRogs
Yeah, fair point.
kgalias10

Whereas I can (somewhat) make sense of thinking with examples, it seems hard to describe just what exactly it means to think with general abstract concepts.

kgalias00

Can you provide some more background? What is a morphism of computations?

0badtheatre
Those are basically the two questions I want answers to. In the thread I originally posted in, Eliezer refers to "pointwise causal isomorphism". We could similarly define a pointwise isomorphism between computations A and B. I think I could come up with a formal definition, but what I want to know is: under what conditions is computation A simulated by computation B, so that if computation A is emulating a brain and we all agree that it contains a consciousness, we can be sure that B does as well.
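[Editor's note: a minimal sketch of one way such a formal definition could go, under my own assumption that "pointwise isomorphism" means a bijection on states that commutes with the step functions; none of this is from the original thread.]

```python
# A computation here is (states, step): a state set plus a transition
# function. f is a pointwise isomorphism from A to B if it is a
# bijection between the state sets with f(step_A(s)) == step_B(f(s)).

def is_pointwise_isomorphism(f, states_a, step_a, states_b, step_b):
    image = {f(s) for s in states_a}
    if image != set(states_b) or len(image) != len(states_a):  # bijection?
        return False
    return all(f(step_a(s)) == step_b(f(s)) for s in states_a)

# Toy example: a 4-cycle and the same cycle run in the other direction.
states = [0, 1, 2, 3]
step_a = lambda s: (s + 1) % 4   # 0 -> 1 -> 2 -> 3 -> 0
step_b = lambda s: (s - 1) % 4   # 0 -> 3 -> 2 -> 1 -> 0
f = lambda s: (-s) % 4           # relabelling that reverses direction

print(is_pointwise_isomorphism(f, states, step_a, states, step_b))  # True
```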
kgalias30

On the other hand, allowing any invertible function to be a morphism doesn't seem strict enough. For one thing, we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the initial state of the first program and ticks off the natural numbers.

I don't understand why this is a counterexample.

0badtheatre
Neither do I, but my intuition suggests that a static copy of a brain/the software necessary to emulate it plus a counter wouldn't cause that brain to experience consciousness, whereas actually running the simulation as a reversible computation would...
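[Editor's note: for what it's worth, a sketch of the construction being discussed, under the loose "any invertible map counts" notion of morphism. The toy step function is my own assumption; nothing here settles the consciousness question.]

```python
# A: a reversible toy computation whose run never revisits a state.
s0 = 5
step_a = lambda s: s + 17            # invertible on the integers

# B: stores a copy of A's initial state and just ticks a counter.
step_b = lambda state: (state[0], state[1] + 1)

# Because A's run never repeats, its state at step n determines n,
# so phi is invertible on A's trajectory.
phi = lambda s: (s0, (s - s0) // 17)

# phi intertwines the two step functions along the run:
s = s0
for _ in range(10):
    assert phi(step_a(s)) == step_b(phi(s))
    s = step_a(s)
print("phi is a step-commuting bijection on the trajectory")
```

So B, which computes nothing about A beyond counting, comes out "isomorphic" to A under the loose definition, which is why that definition seems too permissive.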
kgalias30

What fanfics should I read (perhaps as a HPMOR substitute)?

8MathiasZaman
There's a new subreddit dedicated to rationalist fiction. You can check out stories linked there. I'm currently reading Rationalising Death, a Death Note fanfic, and it's pretty good even though I haven't seen the anime on which it's based. I'm also one-third into Amends, or Truth and Reconciliation, which is a decent look at how Harry Potter characters would logically react to the end of the Second Wizarding War. So far no idiot balls and pretty good characterization.
9tgb
If you haven't yet taken EY's suggestion in the author's notes to read Worm, do so. It's original fiction, but you probably don't mind. Edit: also, this might belong in the media thread?
Manfred160

Harry Potter and the Natural 20.

Object-level response: To the Stars. Meta-level: check the monthly media thread archives and/or HPMOR's author notes. They have lots of good suggestions and in-depth reviews.

4Alsadius
I quite enjoyed https://www.fanfiction.net/s/2857962/1/Browncoat-Green-Eyes (Yes, it's a Harry Potter/Firefly crossover. It's much, much better than the premise makes it sound)
kgalias100

Reading Model Theory was the first time in my life where I read a chapter of a textbook and it made absolutely no sense. In fact, it took about three passes per chapter before they made sense.

I find this experience common, and I'm sure most working mathematicians (as opposed to mere students) would confirm it. One of the most important things is not getting discouraged in the face of total incomprehensibility.

kgalias20

That doesn't seem to be relevant, as Krav Maga teaches you exactly such things as targeting the throat (or groin).

kgalias90

Luke has said it will be in a different ebook.

Load More