All of gwd's Comments + Replies

gwd10

FWIW I normally eat dinner around 6, go to bed 5 hours later at 11pm, and eat my next meal 8.5 hours later at 7:30am; at which point "break-fast" is certainly the right word, since I haven't eaten for 13.5 hours.  Contrast that with breakfast, which only has to last me 5 hours (until lunch at 12:30pm), and lunch, which again only has to last me 5.5 hours (until 6pm).

gwd73

People say that meta-analyses can weed out whatever statistical vagaries there may be in individual studies; but looking at that graph from the meta-study of saturated fat, I'm just not convinced of that at all.  Like, the relative risk of CVD events suddenly goes from 0.2 to 0.8 at a threshold of 9%, and then just stays there?  The relative risk of stroke goes from 0.6 at 9% to 0.9 at 12%, and then down to 0.5 at 13%?  Does that say to you, "more saturated fat is bad", or "there's a statistical anomaly causing this jump"?
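To make the "statistical anomaly" reading concrete, here's a minimal simulation sketch (my own illustration; every number in it is invented, and it is not drawn from the meta-study in question) of how pooled relative-risk estimates can jump around between adjacent intake bins even when the true risk is perfectly flat, simply because each bin pools only a handful of small studies:

```python
import random

random.seed(0)

TRUE_RR = 0.7  # flat "true" relative risk at every intake level

def simulate_study(n=200, baseline=0.10, rr=TRUE_RR):
    """One small study: count events in the exposed and control arms."""
    exposed = sum(random.random() < baseline * rr for _ in range(n))
    control = sum(random.random() < baseline for _ in range(n))
    return exposed, control

for intake in range(8, 14):  # saturated-fat intake, % of energy
    # Only a few small studies land in each intake bin:
    studies = [simulate_study() for _ in range(3)]
    events_exposed = sum(e for e, _ in studies)
    events_control = sum(c for _, c in studies)
    print(f"{intake}% intake: pooled RR estimate {events_exposed / events_control:.2f}")
```

With this few events per bin, estimates that should all print 0.70 routinely wander between roughly 0.5 and 0.9 — about the size of the stroke jump in the graph.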

gwd30

The "purpose" of most martial arts is to defeat other martial artists of roughly the same skill level, within the rules of the given martial art. 

Not only skill level, but usually physical capability level (as proxied by weight and sex) as well.  As an aside, although I'm not at all knowledgeable about martial arts or MMA, it always seemed like an interesting thing to do might be to use some sort of Elo system for fighting as well: a really good lightweight might end up fighting a mediocre heavyweight, and the overall winner for a year might be t... (read more)
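For concreteness, here's a minimal sketch of the standard Elo update that idea would rest on (the function, ratings, and matchup below are invented for illustration):

```python
def elo_update(rating_a, rating_b, score_a, k=32.0):
    """Standard Elo update; score_a is 1.0 for a win, 0.0 for a loss, 0.5 for a draw."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# An evenly rated lightweight beats an evenly rated heavyweight:
light, heavy = elo_update(1600, 1600, score_a=1.0)
print(light, heavy)  # 1616.0 1584.0 -- ratings move on results, not weight class
```

Because the update only looks at ratings and results, weight classes fall out of the bookkeeping entirely; whether that's desirable is exactly the question the comment raises.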

gwd30

I like the MVP!  One comment re the idea of this becoming a larger thing in journalism, in relation to Goodhart's Law ("Once a measure becomes a target, it ceases to be useful as a measure"):

  • Affecting policy and public opinion is a "target"
  • "Real" journalism affects both public opinion and policy, and thus is a "proxy target"
  • If "real" journalism started being affected by prediction markets, then prediction markets would also become a proxy target
  • This would destroy their usefulness as measures

For example, even now, how much of the "85% chance Russia gai... (read more)

2vandemonian
I've heard stories (unverified) that hedge funds manipulated political betting markets ahead of Brexit. Whatever losses you made betting would be more than offset by even a tiny shift in govt bonds or whatever. If something like this happened, and the inherent correction mechanisms of a market were insufficient, I would probably just focus on the forecasts of top users.
4Garrett Baker
For prediction markets, I'm fine if they're consistently inaccurate because I, knowing their inaccuracies, would gain a bunch of money. But because there are smarter people than me who value money more than me, I expect those people will eat up the relevant money (unless the prediction market has an upper limit on how much a single person can bet, like PredictIt). This is probably more of a problem for things like GJ Open or Metaculus, since their forecasts rely a bunch on crowd aggregations, so either they'd need to change the algorithms which report their publicly accessible forecasts, or in fact be less accurate. In general, if NYT starts reporting on (say) Manifold Markets markets, I expect those markets to get a shit ton more accurate, even if NYT readers are tremendously biased.
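The self-correction mechanism here is just expected value. A minimal sketch (the probabilities and prices below are invented for illustration) of why a consistently miscalibrated market pays whoever spots the miscalibration, until the price moves:

```python
def expected_profit_per_share(true_prob, market_price):
    """A YES share costs market_price and pays $1 if the event happens."""
    return true_prob * 1.0 - market_price

# Market says 30%, you believe 50%: buying YES earns $0.20 per share in expectation.
print(f"{expected_profit_per_share(0.50, 0.30):+.2f}")  # +0.20
# Market says 70%, you believe 50%: bet the other side instead.
print(f"{expected_profit_per_share(0.50, 0.70):+.2f}")  # -0.20 on YES, so buy NO
```

Each such trade pushes the price toward the bettor's estimate, which is why per-person bet caps (as on PredictIt) can leave known inaccuracies standing.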
gwd20

I was chatting with a friend of mine who works in the AI space.  He said that the big thing that got them to GPT-4 was the data set, which was basically the entire internet.  But now that they've given it the entire internet, there's no easy way for them to go further along that axis; the next big increase in capabilities would require a significantly different direction than "more text / more parameters / more compute".

6awg
I'd have to disagree with this assessment. Ilya Sutskever recently said that they've not run out of data yet. They might some day, but not yet. And Epoch projects high-quality text data to run out in 2024, with all text data running out in 2040.
gwd30

Thanks for these, I'll take a look.  After your challenge, I tried to think of where my impression came from.  I've had a number of conversations with relatives on Facebook (including my aunt, who is in her 60s) about whether GPT "knows" things; but it turns out so far I've only had one conversation about the potential of an AI apocalypse (with my sister, who started programming 5 years ago).  So I'll reduce my confidence in my assessment re what "people on the street" think, and try to look for more information.

Re HackerNews -- one of the tri... (read more)

gwd30

Can you give a reference?  A quick Google search didn't turn anything like that up.

7hairyfigment
Here's some more: https://www.monmouth.edu/polling-institute/reports/monmouthpoll_us_021523/
2hairyfigment
This may be what I was thinking of, though the data is more ambiguous or self-contradictory: https://www.vox.com/future-perfect/2019/1/9/18174081/fhi-govai-ai-safety-american-public-worried-ai-catastrophe
6hairyfigment
I'll look for the one that asked about the threat to humanity, and broke down responses by race and gender. In the meantime, here's a poll showing general unease and bipartisan willingness to legally restrict the use of AI: https://web.archive.org/web/20180109060531/http://www.pewinternet.org/2017/10/04/automation-in-everyday-life/ Plus: I do note, on the other side, that the general public seems more willing to go Penrose, sometimes expressing or implying a belief in quantum consciousness unprompted. That part is just my own impression.
gwd94

To me it's an attempt at the simple, obvious strategy of telling people ~all the truth he can about a subject they care a lot about and where he and they have common interests.  This doesn't seem like an attempt to be clever or explore high-variance tails.  More like an attempt to explore the obvious strategy, or to follow the obvious bits of common-sense ethics, now that lots of allegedly clever 4-dimensional chess has turned out stupid.

But it does risk giving up something.  Even the average tech person on a forum like Hacker News still thi... (read more)

"The average person on the street is even further away from this I think."

This contradicts the existing polls, which appear to say that everyone outside of your subculture is much more concerned about AGI killing everyone. It looks like if it came to a vote, delaying AGI in some vague way would win by a landslide, and even Eliezer's proposal might win easily.

1Qumeric
I second this. I think people really get used to discussing things in their research labs or in specific online communities. And then, when they try to interact with the real world and even do politics, they kind of forget how different the real world is. Simply telling people ~all the truth may work well in some settings (although it's far from all that matters in any setting) but almost never works well in politics. Sad but true.  I think that Eliezer (and many others, including myself!) may be susceptible to "living in the should-universe" (as named by Eliezer himself). I do not necessarily say that this particular TIME article was a bad idea, but I feel that people who communicate about x-risk are on average biased in this way, and it may greatly hinder the results of their communication.  I also mostly agree with "people don't take AI alignment seriously because we haven't actually seen anything all that scary yet". However, I think that the scary thing is not necessarily "simulated murders". For example, a lot of people are quite concerned about unemployment caused by AI. I believe it might change perception significantly if it actually turns out to be a big problem, which seems plausible.  Yes, of course, it is a completely different issue. But on an emotional level, it will be similar (AI == bad stuff happening).

"For instance, personally I think the reason so few people take AI alignment seriously is that we haven't actually seen anything all that scary yet. "

And if this "actually scary" thing happens, people will know that Yudkowsky wrote the article beforehand, and they will know who the people are that mocked it.

gwd10

Sorry -- that was my first post on this forum, and I couldn't figure out the editor.  I didn't actually click "submit", but accidentally hit a key combo that it interpreted as "submit".

 I've edited it now with what I was trying to get at in the first place.

gwd94
  • People may be biased towards thinking that the narrow slice of time they live in is the most important period in history, but statistically this is unlikely.
  • If people think that something will cause the apocalypse or bring about a utopian society, historically speaking they are likely to be wrong.

Part of the problem with these two is that whether an apocalypse happens or not often depends on whether people took the risk of it happening seriously.  We absolutely could have had a nuclear holocaust in the '70s and '80s; one of the reasons we didn't is b... (read more)

1Noosphere89
What, exactly, is this comment intended to say?