All of phdead's Comments + Replies

phdead (32)

I think it's important to disambiguate between searching for new problems and searching for new results.

  1. For new results: while I have as little faith in academia as the next guy, I have a web of trust of other researchers who I know do good work, and whose work is correct at a much higher rate. I also give a lot of credence to their verification / word of mouth on experiments. This web of trust is a much more useful high-pass filter for understanding the state of the field. I have no such filter for results outside of academia. When searching for new conc
... (read more)
phdead (54)

I am honor-bound to mention that we do use gravity to store energy: https://en.wikipedia.org/wiki/Pumped-storage_hydroelectricity

Big fan of the blog.

gostaks (2)
We also literally use large weights. The most promising proposals use existing infrastructure like train tracks and mineshafts, e.g. "Gravity System Aids Storage in Unused Mine Shaft".
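For a sense of scale, here is a minimal back-of-the-envelope sketch of both schemes. All masses, heights, and the round-trip efficiency are illustrative assumptions, not figures from the thread:

```python
# Gravity storage back-of-the-envelope: E = m * g * h, discounted by
# round-trip efficiency. All numbers below are illustrative assumptions.

g = 9.81  # gravitational acceleration, m/s^2

def stored_energy_kwh(mass_kg: float, height_m: float, efficiency: float = 0.8) -> float:
    """Recoverable energy from raising mass_kg by height_m, in kWh."""
    joules = mass_kg * g * height_m * efficiency
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

# Hypothetical 1,000-tonne weight in a 500 m mine shaft:
print(stored_energy_kwh(1_000_000, 500))  # ~1,090 kWh, roughly a month of typical US household use

# Hypothetical pumped-hydro reservoir: 10 million tonnes of water over a 300 m head:
print(stored_energy_kwh(1e10, 300))       # ~6.5 million kWh
```

The contrast in the two outputs is the usual argument for pumped hydro: water in a reservoir is the only "weight" cheap enough to deploy at the multi-GWh scale.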
phdead (32)

Never thought of that particular issue, and I grant that I basically haven't thought at all about how this proposal could be abused by people trying to stymie any system they don't like. Yeah, in retrospect, using the GDPR in the TL;DR blurb was a pretty bad unforced error; I was using it more as evidence that such proposals can be passed. However, I think I didn't really justify why regulation is needed beyond "governments might want to do it, and consumers might want it", which you correctly point out is insufficient given the regulatory cost these kinds of things inevitably bring. I need to figure out whether this half-baked idea merits more time in the oven...

phdead (21)

I think the GDPR cookie regulation is bad because it forces users to make the choice, adding an obnoxious layer to using any website. The granular control for users itself I don't think is a problem? As I say towards the end, I don't think we should force users to choose upon using a website/app, but only allow more granular control over what data will be used in which feeds.

habryka (7)
Supporting GDPR easily doubles the cost of many software projects and introduces unclear liability for a huge number of organizations that cannot afford it. It's an incredible pain for basically every organization I know of, including in situations where you really wouldn't expect it to be (one example I recently heard: "the organization cannot integrate sensitive external complaints about attendees into the admission process for their events, because the complaints would constitute private information which they would then need to share with the attendees the complaints are about").
phdead (40)

I am a young, bushy-eyed first-year PhD student. I imagine if you knew how much of a child of summer I am, you would sneer on sheer principle, and it would be justified. I have seen a lot of people expecting eternal summer, and this is why I predict a chilly fall. Not a full winter, but a slowdown as expectations come back down to reality.

Ilio (1)
I wish I had been wise enough at your age to post my gut feelings on the internet so that I could better update later. Well, the internet did not exist, but you get the idea. One question, after gwern's reformulation: do you agree that, in the past, technical progress in ML almost always came first (before fundamental understanding)? In other words, is the crux of your post that we should no longer hope for practical progress without truly understanding why what we do should work?
phdead (10)

The point I was trying to make is not that there weren't fundamental advances in the past. There were decades of advances in fundamentals, which rocketed development forward at an unsustainable pace. The effect of this can be seen in the sheer amount of computation being used for SOTA models. I don't foresee that same leap happening twice.

phdead (210)

The summary is spot on! I would add that the compute overhang was not just due to scaling, but also due to 30 years of Moore's law and NVidia starting to optimize their GPUs for DL workloads.
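As a toy illustration of how much that compounding alone contributes (the two-year doubling period is the textbook rule of thumb, not a figure from the comment):

```python
# How much cheap compute three decades of Moore's-law-style doubling piles up,
# before any algorithmic progress. The doubling period is an assumption.

doubling_period_years = 2.0
years = 30

growth = 2 ** (years / doubling_period_years)
print(f"~{growth:,.0f}x")  # ~32,768x more transistors per chip/dollar over 30 years
```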

The rep-range idea was meant to communicate that, despite AlphaStar being a much smaller model than GPT, the training costs of the two were much closer because of how AlphaStar was trained. Reading it now, it does seem confusing.

I meant progress in research innovations. You are right, though: from an application perspective, the plethora of low-hanging fruit will have a lot of positive effects on the world at large.

Kaj_Sotala (5)
I'm not certain that "the fundamentals remain largely unchanged" necessarily implies "the near future will be very disappointing to anyone extrapolating from the past few years", though. Yes, if the recent results didn't depend on improvements in fundamentals, then we can't use the recent results to extrapolate further progress in fundamentals. But on the other hand, if the recent results didn't depend on fundamentals, that implies you can accomplish quite a lot without many improvements in fundamentals. This in turn implies that if anyone managed just one advance on the fundamental side, it could again allow for several years of continued improvement, and we wouldn't need to see lots of fundamental advances to see a lot of improvement. So while your argument reduces the probability of us seeing a lot of fundamental progress in the near future (making further impressive results less likely), it also implies that the amount of fundamental progress required is less than might otherwise be expected (making further impressive results more likely).
phdead (10)

Out of curiosity, what is your reasoning behind believing that DL has enough momentum to reach AGI?

Morpheus (2)
Mostly abstract arguments that don't actually depend on DL in particular (or at least not to a strong degree). E.g., evolution, stupid as it is, was able to do it with human brains. This spreadsheet is nice for playing with the implications of different models (I couldn't find Ajeya's report that it belongs to). I haven't taken the time to think this through thoroughly, though, because plugging in reasonable values gave distributions that seemed too broad to bother with. The point I wanted to make is that you can believe things are slowing down (I am more sympathetic to the view that AI will not have a big/galactic impact until it is too late) and still be worried.
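For the curious, here is a heavily simplified sketch of the kind of calculation such bio-anchors-style spreadsheets do: put a log-space distribution over the training compute required, then see when projected compute growth crosses it. Every constant below is a made-up placeholder, not a number from Ajeya's report:

```python
# Toy bio-anchors-style timeline calculation. All constants are placeholders.

import random

median_log10_flop = 32.0   # hypothetical median training-FLOP requirement (1e32)
sigma_log10 = 3.0          # hypothetical spread, in orders of magnitude
current_log10_flop = 25.0  # hypothetical compute of today's largest runs (1e25)
oom_per_year = 0.5         # hypothetical growth: ~3x more FLOP available each year

# Sample requirements in log space, then convert each to "years until crossed".
samples = [random.gauss(median_log10_flop, sigma_log10) for _ in range(100_000)]
years_to_cross = sorted(max(0.0, (s - current_log10_flop) / oom_per_year) for s in samples)

print("median:", years_to_cross[len(years_to_cross) // 2], "years")
print("10th percentile:", years_to_cross[len(years_to_cross) // 10], "years")
```

With the placeholder values above, the spread between the percentiles is enormous, which is exactly the "too broad to bother" problem: reasonable-sounding inputs yield timelines ranging from imminent to many decades out.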
phdead (20)

My thoughts for each question:

  1. Depending on context, there are a few ways I would communicate this. Take the phrase "We are quiet here." Said to prospective tenants at an apartment complex, it is "communicating group norms". Said to a friend who is talking during a funeral, it is "enforcing group norms". Telling yourself you will do this before you sleep is "enforcing identity norms". You are sharing information, just local information about the group instead of global information about the world. All the examples given are information sharing.
  2. Believing in
... (read more)