All of Sysice's Comments + Replies

Sysice00

"active investment with an advisor is empirically superior to passive non-advised investment for most people." Can you source this?

0Alsadius
https://www.ific.ca/wp-content/uploads/2013/08/New-Evidence-on-the-Value-of-Financial-Advice-November-2012.pdf/1653/

In particular, the research paper provides new evidence that:
1. Advice has a positive and significant impact on financial assets after factoring out the influence of close to 50 socio-economic, demographic and attitudinal variables that also affect individual financial assets;
2. The positive effect of advice on wealth accumulation cannot be explained by asset performance alone: the greater savings discipline acquired through advice plays an important role;
3. Advice positively impacts retirement readiness, even after factoring out the impact of a myriad of other variables; and
4. Having advice is an important contributor to levels of trust, satisfaction and confidence in financial advisors—a strong indicator of value.
Sysice30

This isn't necessarily so: if you have to think about using that link as charity while shopping, it could decrease your likelihood of doing other charitable things (which is why you should set up a redirect so you don't have to think about it, and always use it every time!)

2faul_sname
Amazon already does that for you -- if you go to buy something without using that link, it'll ask you if you want to.
Sysice00

Does this stack with Amazon Smile? For example, how much money goes where when comparing this link to this one?

1Peter Wildeford
Yes, it does. Amazon Smile gives 0.5% and Shop for Charity / Slatestarcodex gives ~5%. You can stack the two to give 5.5%. (But you cannot stack Shop for Charity and Slatestarcodex.)
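For concreteness, a tiny sketch of how those two rates would combine on a hypothetical $100 purchase (the purchase amount is made up; the rates are the ones quoted above):

```python
# Toy illustration of stacking Amazon Smile with an affiliate link, using the rates
# quoted in the comment above; the $100 purchase amount is purely hypothetical.
purchase = 100.00
smile_rate = 0.005      # Amazon Smile: 0.5% of the purchase price
affiliate_rate = 0.05   # Shop for Charity / Slatestarcodex affiliate commission: ~5%

smile_donation = purchase * smile_rate
affiliate_donation = purchase * affiliate_rate

print(f"Amazon Smile:     ${smile_donation:.2f}")                       # $0.50
print(f"Affiliate link:   ${affiliate_donation:.2f}")                   # $5.00
print(f"Total to charity: ${smile_donation + affiliate_donation:.2f}")  # $5.50
```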
0tog
I don't actually know, and can't test it as they don't have Amazon Smile outside the US. However if you buy something using both and message me with what it was, I can check if we get commission on it.
Sysice300

It might be useful to feature a page containing what we, you know, actually think about the basilisk idea. Although the RationalWiki page seems to be pretty solidly on top of Google search results, we might catch a couple of people looking for the source.

If any XKCD readers are here: Welcome! I assume you've already googled what "Roko's Basilisk" is. For a better idea of what's going on with this idea, see Eliezer's comment on the xkcd thread (linked in Emile's comment), or his earlier response here.

4JoshuaFox
That explanation by Eliezer cleared things up for me. He really should have explained himself earlier. I actually had some vague understanding of what Eliezer was doing with his deletion and refusal to discuss the topic, but as usual, Eliezer's explanation makes things that I thought I sort-of-knew seem obvious in retrospect. And as Eliezer realizes, the attempt to hush things up was a mistake. Roko's post should have been taken as a teaching moment.
4maxikov
Exactly. Having the official position buried in comments with long chains of references doesn't help it sound convincing compared to a well-formatted (even if misleading) article.
9Punoxysm
Because of Eliezer's reaction, probably a hundred more people have heard of the Basilisk, and it tars LW's reputation. And this wasn't particularly unforeseeable -- see the Streisand Effect. Part of rationality is about regarding one's actions as instrumental. He mucked that one up. But to be fair to him, it's because he takes these ideas very seriously. I don't care about the basilisk because I don't take elaborate TDT-based reasoning too seriously, partially out of ironic detachment, but many here would say I should.
6XiXiDu
For a better idea of what's going on you should read all of his comments on the topic in chronological order.
4FiftyTwo
That response in /r/futurology is really good, actually; I hadn't seen it before. Maybe it should be reposted (with the sarcasm slightly toned down) as a main article here? Also, kudos to Eliezer for admitting he messed up with the original deletion.
6Azathoth123
I'm guessing Eliezer has one of those, probably locked away behind a triply-locked vault in the basement of MIRI.
Sysice50

You seem to be saying that people can't talk about, think about, or discuss topics unless they're currently devoting their lives to those topics with maximum effectiveness. That seems... incredibly silly.
Your statements seem especially odd considering that there are people currently doing all of the things you mentioned (which is why you knew to mention them).

-5Shmi
Sysice390

Did the survey (a couple days ago).

I wasn't here for the last survey -- are the results predominantly discussed here and on Yvain's blog?

3Adele_L
Yes, Yvain will write a post about the results here once it is finished. I think historically that has been around the start of the new year.
Sysice100

I find it very useful to have posts like these as an emotional counter to the echo-chamber effect. Obviously this has little or no effect on the average LW reader's factual standpoint, but it reminds us both of how heuristically absurd our ideas are and of how much we have left to accomplish.

-1[anonymous]
I, for one, love that guy's blog.
2John_Maxwell
I don't think LW's ideas are heuristically absurd. If you look at the comments on the CNET article, people seem pretty evenly divided for and against. (Criticism is still very valuable though.)
Sysice10

True. I've always read at around that speed by default, though, so it's not related to speed-reading techniques, and I don't know how to improve the average person's default speed.

2ChristianKl
"default" is a deceptive word. You probably didn't read at that speed when you where 10 years old. Somewhere along the lines you learned it. Given that you learned it and don't know how you learned it, there also no good reason to assume that you are at the maximum that's possible.
Sysice30

This matches my experience. Speed-reading software like Textcelerator is nice when I want to go through a fluff story at 1200 WPM, but anything remotely technical requires me to be at 400-600 WPM at most, and speed-reading does not fundamentally affect this limit.

2ChristianKl
Reading technical material at 600 WPM would still be much faster than the average person.
Sysice20

HPMOR is an excellent choice.

What's your audience like? A book club (presumed interest in books, but not significantly higher maturity or interest in rationality than baseline), a group of potential LW readers, some average teenagers?

The Martian (Andy Weir) would be a good choice for a book-club-level group: very entertaining to read, and it promotes useful values. Definitely not of the "awareness raising" genre, though.

If you think a greater-than-average number of them would be interested in rationality, I'd consider spending some time on Ted Chiang...

3Brendon_Wong
Thank you for your help! I have edited my post with additional information. My audience is a general youth audience: think of promoting content to an entire high school, with "average teenagers" as well as people who might be more interested in the content. Of course, some people will be more interested than others, so a wide variety of recommendations for different interest groups is better. I'm primarily looking for books that promote ethical/altruistic behavior; I'm not sure if any of your aforementioned recommendations do so.
Sysice120

Giving What We Can recommends donating at least 10% of income. I currently donate what I can spare, and have precommitted to 50% of my post-tax income in the event that I acquire a job that pays over $30,000 a year (read: once I graduate college). The complication in your case is that you already have a steady income and have arranged your life around it: it's much easier to avoid raising expenses in response to new income than it is to lower expenses from a set income.

Like EStokes said, however, the important thing isn't to get caught up in how much you should be donating in order to meet some moral requirement. It's to actually give in a way that you, yourself, can give. We all do what we can :)

2Gunnar_Zarncke
It's good to give factual numerical values. But I looked up GWWC's explanation for the suspiciously round number of 10%, and this 10% appears to be arbitrary from the point of view of the OP's question. There seems to be no ethical reasoning behind the 10%; at least, it looks more like charity-optimization. This may sound harsh, but that is what it looks like in this light.
Sysice80

How I interpreted the problem: it's not that identical agents have different utility functions, it's just that different things happen to them. In reality, what's behind the door is behind the door, while the simulation rewards X with something else. X is only unaware of whether or not he's in a simulation before he presses the button; obviously, once he actually receives the utility he can tell the difference. The fact that nobody else has stated this makes me unsure, though. OP, can you clarify a little bit more?

1lackofcheese
Yes, this is how I view the problem as well.
Sysice220

It's tempting to say that, but I think pallas actually meant what he wrote. Basically, hitting "not sim" gets you a guaranteed 0.9 utility, while hitting "sim" gets you about 0.2 utility (approaching that value as the number of copies increases). Even though each person strictly prefers "sim" to "not sim", and a CDT agent would choose "sim", it appears that choosing "not sim" gets you more expected utility.

Edit: "not sim" has higher expected utility for an entirely selfish agent who does not know whether he is simulated, because his choice affects not only his utility payout but also, acausally, his state of simulation. Of course, this depends on my interpretation of anthropics.

1Chris_Leong
Thanks for the explanation. I had no idea what was actually going on here.
8gjm
Oh, I see. Nice. Preferring "not sim" in this case feels rather like falling victim to Simpson's paradox, but I'm not at all sure that's not just a mistake on my part.
Sysice140

Most of what I know about CEV comes from the 2004 Yudkowsky paper. Considering how many of his views have changed over similar timeframes, and how the paper states multiple times that CEV is a changing work in progress, this seems like a bad thing for my knowledge of the subject. Have there been any significant public changes since then, or are we still debating based on that paper?

Sysice10

I'm interested in your statement that "other people" have estimates that are only a few decades off from optimistic trends. Although not very useful for this conversation, my impression is that a significant portion of informed but uninvolved people place a <50% chance on significant superintelligence occurring within the century. For context, I'm an LW reader and a member of that personality cluster, but none of the people I'm exposed to are. Can you explain why your contacts make you feel differently?

2leplen
How about human level AI? How about AI that is above human intelligence but not called "a superintelligence"? I feel like the general public is over-exposed to predictions of drastic apocalyptic change and phrasing is going to sway public opinion a lot, especially since they don't have the same set of rigorous definitions to fall back on that a group of experts does.
2KatjaGrace
Firstly, I only meant that 'other' people are probably only a few decades off from the predictions of AI people -- note that AI people are much less optimistic than AGI people or futurists, with 20% or so predicting AGI after this century. My contacts don't make me feel differently. I was actually only talking about the different groups in the MIRI dataset pictured above (as shown in the earlier graph with four groups). Admittedly the 'other' group there is very small, so one can't infer that much from it. I agree your contacts may be a better source of data, if you know their opinions in an unbiased way. I also doubt the non-AGI AI group is as strongly selected for optimism about eventual AGI from among humans as AGI people are from among AI people. Then, since the difference between AI people and AGI people is only a couple of decades at the median, I doubt the difference between AI researchers and other informed people is that much larger. It may be that people who make public comments at all tend to be a lot more optimistic than those who do not, though the relatively small apparent differences between surveys and public statements suggest not.
Sysice30

I don't disagree with you: this would, indeed, be a sad fate for humanity, and certainly a failed utopia. But the failing here is not inherent to the idea of an AGI that takes action on its own to improve humanity; it's a failing of one that doesn't do what we actually want it to do, i.e. a failure to actually achieve friendliness.

Speaking of what we actually want, I want something more like what's hinted at in the fun theory sequence than an AGI that only slowly improves humanity over decades, which seems to be what you're talking about here. (Tell me if I misunderstood, of course.)

0HopefullyCreative
You actually hit the nail on the head in terms of understanding the AGI I was referencing. I thought about problems such as: why would a firm researching crop engineering to solve world hunger bother to pay a full and very expensive staff? Wouldn't an AGI that not only crunches the numbers but also manages mobile platforms for physical experimentation be more cost-effective? The AGI would be smarter and would run around the clock testing, postulating, and experimenting. Researchers would quickly find themselves out of a job if the ideal AGI were born for this purpose. Of course, if men took on artificial enhancements, their own cognitive abilities could improve enough to compete. They could even potentially digitally network ideas, or manage mobile robotic platforms with their minds as well. It seems, therefore, that the best solution to the potential labor-competition problems with AGI is simply to use the AGI to help with, or outright carry out, research into methods of making men mentally and physically better.
Sysice00

...Which, of course, this post also accomplishes. On second thought, continue!

Sysice10

The answer is, as always, "it depends." Seriously, though: I time-discount to an extent, and I don't want to stop totally. I prefer more happiness to less, and I don't want to stop. (I don't care about the ending date, and I'm not sure why I would want to.) If a trade-off exists between the starting date, quality, and duration of a good situation, I'll prefer one situation over the other based on my utility function. A better course of action would be to try to get more information about my utility function, rather than debating which value is more sacred than the rest.

0Sysice
...Which, of course, this post also accomplishes. On second thought, continue!
Sysice50

I've voted, but for the sake of clear feedback: I just made my first donation ($100) to MIRI, directly as a result of both this thread and the donation-matching. This thread alone would not have been enough, but I would not have found out about the donation-matching without this thread. I had no negative feelings about having this thread in my recent-posts list.

Consider this a positive pattern reinforced :)

3KnaveOfAllTrades
Awesome!
Sysice80

MMEU (maximin expected utility) fails as a decision theory that we actually want for the same reason that laypeople's intuitions about AI fail: it's rare to have a proper understanding of how powerful the words "maximum" and "minimum" are. As a quick example, actually following MMEU means that a vacuum metastability event is the best thing that could possibly happen to the universe, because it removes the possibility of humanity being tortured for eternity. Add in the fact that it doesn't allow you to deal with infinitesimals correctly (e.g. Pascal's Wager should never fail to convince an MMEU agent), and I'm seriously confused as to the usefulness of this.
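To make the worst-case fixation concrete, here is a minimal sketch with entirely made-up outcomes, utilities, and candidate priors (none of these numbers come from the original post); it just mechanically applies the maximin-expected-utility rule and shows it preferring the action that zeroes out the worst case, however improbable that worst case is under every prior considered:

```python
# Minimal sketch of the maximin-expected-utility (MMEU) rule with made-up numbers.
# An MMEU agent entertains a *set* of candidate priors and ranks each action by its
# worst expected utility across that set.

def expected_utility(probs, utils):
    return sum(p * u for p, u in zip(probs, utils))

def mmeu(utils, candidate_priors):
    """Worst-case expected utility over the set of candidate priors."""
    return min(expected_utility(probs, utils) for probs in candidate_priors)

# Outcomes: [ordinary future, humanity tortured for eternity]
carry_on     = [1.0, -1e9]   # normal future is mildly good; the torture outcome is terrible
vacuum_decay = [0.0,  0.0]   # the universe ends either way, so both outcomes score 0

# Candidate priors over those outcomes; they disagree about how likely torture is,
# but both regard it as very unlikely.
candidate_priors = [
    [1 - 1e-12, 1e-12],   # torture essentially impossible
    [1 - 1e-3,  1e-3],    # torture merely very unlikely
]

print("MMEU(carry on)     =", mmeu(carry_on, candidate_priors))      # ~ -999999, dragged down by the worst case
print("MMEU(vacuum decay) =", mmeu(vacuum_decay, candidate_priors))  # 0.0, so the MMEU agent prefers ending everything
```

Under the optimistic prior alone, ordinary expected utility favors carrying on; MMEU instead keeps only the worst expected utility across the whole set of priors, which is what drives the vacuum-decay preference in this toy setup.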