All of Vivificient's Comments + Replies

(one-liner - for policy makers)

Within ten years, AI systems could be more dangerous than nuclear weapons.  The research required for this technology is heavily funded and virtually unregulated.

Thanks for this post. I've seen the term inadequacy before (mostly on your Facebook page, I think) but never had such a clear definition in mind.

There was one small thing that bothered me in this post, without detracting from the main argument. In section IV, we provisionally accept the premise "grantmakers are driven by prestige, not expected value of research" for the sake of a toy example, and I was happy to do so. However, in section V (the omelette example and related commentary about research after the ... (read more)

Certainly! Here it is: https://i.snag.gy/8QxDsF.jpg

On that page, it is fine at normal zoom, but the problem occurs when I zoom out to 80%, at which point the text is roughly the same size as here. So I guess it is something to do with how the font is rendering at that size. Whether it is something wrong with my computer or with the font, I don't know.

1habryka
Well, I guess I am disappointed in Edward Tufte. This makes it more likely that we will move away from our current font setup, which makes me sad since I do really like how the font renders on all of my devices.

Here is what I am seeing:
https://snag.gy/tvGpdx.jpg

I am on Chrome on Windows 10. Experimentation shows that the effect only happens when the page zoom is at 100%... if I zoom in or out, the w goes back to normal.

1habryka
Wow, yeah. This is definitely broken. One more way you could help me out: could you send me a screenshot of how this page looks? https://edwardtufte.github.io/tufte-css/ That's where a large part of our fonts and styles come from.

The comment font has a weird lowercase 'w'. It is larger than the surrounding letters. Now that I have noticed it, I can't stop being distracted by it.

1habryka
Hmm, I am not noticing anything. Could you post a link to a screenshot?

It is done. (The survey. By me.)

I have taken the survey, including the digit ratio question.

Since there was a box to be included in the SSC survey, I was just a little bit disappointed there wasn't a question for favourite SSC post to go with the favourite LessWrong post question.

Making things happen with positive thinking requires magic. But myths about the health effects of microwaves or plastic bottles are dressed up to look like science as usual. The microwave thing is supposedly based on the effect of radiation on the DNA in your food or something -- nonsense, but to someone with little science literacy not necessarily distinguishable from talk about the information-theoretic definition of death.

I'm not sure that signing papers to have a team of scientists stand by and freeze your brain when you die is more boring than cooking your food without a microwave oven. I would guess that cryonics being "weird", "gross", and "unnatural" would be more relevant.

Upvoted for providing a clear counterexample to Yvain's assertion that people would find immortality to be "surely an outcome as desirable as any lottery jackpot".

This suggests that a partial explanation for the data is that "experienced rationalists" (high karma, long time in community) are more likely to find immortality desirable, and so more likely to sign up for cryonics despite having slightly lower faith in the technology itself.

Your conclusion is possible. But I'll admit I find it hard to believe that non-rationalists really lack the ability to take ideas seriously. The 1 = 2 example is a little silly, but I've known lots of not-very-rational people who take ideas seriously. For example, people who stopped using a microwave when they heard about an experiment supposedly showing that microwaved water kills plants. People who threw out all their plastic dishes after the media picked up a study about health dangers caused by plastics. People who spent a lot of time thinking pos... (read more)

1[comment deleted]
7Viliam_Bur
Your examples require magic, pseudoscience, conspiracy theories. Perhaps the advantage of rationalists is the ability to take boring ideas seriously. (Even immortality is boring when all you have to do is buy life insurance, sign a few papers, and wait. And admit that it most likely will not work. And that if it does work, it will pretty much be science as usual.)

IQ - I could hire excellent tutors to make myself more intelligent, though definitely only to a certain point. More to the point, I could hire smart people to think of good ideas for me. I'll concede that I couldn't buy the experience of thinking like someone smarter than myself.

emotional states - Hire some psychologists to figure out what experiences cause people to have them, then buy those experiences.

personal achievements - This one I'll give you; you can't buy achieving something for yourself.

honour - This is a very vague term to me.

5Lumifer
As far as I know you can't raise your IQ significantly by training. There is a lot of data about how much you can raise your SAT/GRE/LSAT/etc. scores with tutors and training, and that amount is limited; plus, most of the gain is test-specific and not properly a rise in g.

That doesn't buy you IQ, that buys you solutions to problems. That's a different thing.

Let's see how that works for achieving moksha (= becoming enlightened) :-D

I find it a fun game trying to think of things that money can't buy (but that it is possible for people to get in other ways). It's difficult to think of a lot of answers, especially allowing for strategies like hiring someone to train you to become the kind of person who gets x. The best answer I've been able to come up with is anything specific, such as the friendship of a particular person.

4Lumifer
IQ seems to be the one obvious thing. A variety of inner emotional states. A lot of personal achievements. Things like honor.

How hard did you find it to organize/run a meetup? How did that compare to what you expected?

1Axel
How hard it is depends on what kind of meetup you're running; in my case it's very easy. The Brussels group is more of a social gathering. We start off with a topic for the day but go on wild tangents/play board games and generally just have fun. The only things I ever needed to do as an organizer were: pick a topic for the meetup, post the meetup on the site, arrive on time, make new members feel welcome, and manage the mailing list.

When I started out I honestly didn't have any expectations about how hard it would be; I had no idea how the meetups would turn out and had decided to just run with whatever happened. Once the meetup had a core group of regulars, some of them offered to help and I could delegate the stuff I'm not very good at (like the meetup posts on LW and coming up with topics). These days the only things I feel I have to do are put in an extra effort to involve new members and keep the atmosphere friendly (which, in two years of meetups, has only once been a problem; LW'ers are generally great people), and those are things I would do anyway.

I know there are other meetups where the organizer has more responsibility. For example, if you have a system where every month another person gives a short presentation, you have to manage that as well. For larger groups (Brussels rarely has more than 4 people) an official moderator-type person might be handy to make sure quieter people get a chance to speak up. There is no one "right" way to run a meetup: see why people enjoy coming to yours and try to make that part as awesome as you can. Just keep an open mind about trying new things every now and then.

In short, how hard it is to run a meetup depends on the type (social, exercise focused, presentations, etc.). In my case, it's very easy, especially since I have others helping me out. If you're thinking of starting one yourself, don't worry too much about what type you want it to be; just see how the first few meetings go and it'll sort itself out from there.

See this wiki page for links to discussion of Free Will in the sequences: http://wiki.lesswrong.com/wiki/Free_will

I'm not sure if I understand what you're suggesting. As I understand it, the argument isn't that Walmart is literally getting subsidies. It's just that Walmart employees are getting welfare, so Walmart doesn't have to pay to support them, which reduces Walmart's costs relative to a hypothetical equivalent company that paid its workers a better wage.

So if you created a company which provided working conditions as good as possible, your employees wouldn't need welfare, so you wouldn't be benefiting from the "subsidies". Also, your costs would go up, so you'd be more likely to go out of business than Walmart.

-3Viliam_Bur
I am suggesting giving people exactly the same money that Walmart is giving them (so the company benefits from the subsidies), but treating them well, and actually making them spend an hour or more of each working day getting a better education (during working hours), or something similar that will improve their lives in the long term. Such an option would be a strict improvement over Walmart. Those who have better options available are simply not our target group. We are trying to improve the lives of those who currently work cheaply for Walmart.

I don't have a full strategy, but I have an idea for a data-gathering experiment:

I hand you a coin and try to get you to put it in the box for me. If you refuse, I update in the direction of the box harming people who put coins in it. If you comply, I watch and see what happens.
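(For concreteness, here is a minimal sketch of that update written out as Bayes' rule, in Python. All of the numbers are illustrative assumptions I am making up for the example, not anything implied by the box scenario itself.)

```python
# Hypothetical sketch of the update described above: a refusal to put the
# coin in the box is (weak) evidence that the box harms people who do so.

def posterior_harmful(prior_harmful, p_refuse_if_harmful, p_refuse_if_safe):
    """Bayes' rule: P(box is harmful | the person refused)."""
    p_refuse = (p_refuse_if_harmful * prior_harmful
                + p_refuse_if_safe * (1 - prior_harmful))
    return p_refuse_if_harmful * prior_harmful / p_refuse

# Made-up numbers: a 10% prior that the box is harmful, and a guess that
# people are more likely to refuse if it really is harmful.
print(posterior_harmful(prior_harmful=0.10,
                        p_refuse_if_harmful=0.80,
                        p_refuse_if_safe=0.30))
# -> about 0.23, so the refusal shifts belief toward "the box is harmful"
```

The size of the shift depends entirely on how much more likely I think a refusal is when the box really is harmful, which is exactly the thing the experiment is meant to probe.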

0David_Chapman
Excellent! This is very much pointing in the direction of what I consider the correct general approach. I hadn't thought of what you suggest specifically, but it's an instance of the general category I had in mind.

I have never posted on LW before, but this seems like a fine first time to do so.

I am really very curious to see the results of the real-world cooperate/defect choice at the bottom of the test.