phdead

Comments

phdead

I think it's important to distinguish between searching for new problems and searching for new results.

  1. For new results: while I have as little faith in academia as the next guy, I have a web of trust in other researchers whom I know do good work, and the rate of their work being correct is much higher. I also give a lot of credence to their verification and word of mouth on experiments. This web of trust is a much more useful high-pass filter for understanding the state of the field. I have no such filter for results outside of academia. When searching for new concrete information, results outside of academia are not worth scientists' attention due to the lack of trust / reputation.
  2. When it comes to searching for new hypotheses / problems, an important criterion is how much you personally believe in your direction. Practically, you never pursue ideas you think have a 10% probability of working: ideally you pursue ideas you think have a 50% probability but your peers believe have a 15% probability. (This assumes you have high risk tolerance like I do and are okay with a lot of failure. Otherwise, do incremental research.) For problem generation, varied sources of information are useful, but the belief must come intrinsically.
  3. When searching for interesting results to verify and replicate, it's open season.

As a result, I think that ideas outside academia are not useful to researchers unless the researchers in question have a comparative advantage at synthesizing those ideas into good research inspiration.

As for non-ideal reasons for ignoring results outside academia, I would blame reviewers, and a generally low appetite for risk despite research being an inherently risky profession, more than vague "status concerns".

phdead

I am honor-bound to mention that we do use gravity to store energy: https://en.wikipedia.org/wiki/Pumped-storage_hydroelectricity

Big fan of the blog.

phdead

Never thought of that particular issue, and I grant that I basically haven't thought at all about how this proposal could be abused by people trying to stymie any system they don't like. Yeah, in retrospect, using the GDPR in the TL;DR blurb was a pretty bad unforced error. I was more using it as evidence that such proposals can be passed. However, I think I didn't really justify why regulation is needed beyond "governments might want to do it, and consumers might want it", which you correctly point out is insufficient given the amount of regulatory cost these kinds of things inevitably bring. Need to figure out if this half-baked idea merits more time in the oven...

phdead

I think the GDPR cookie regulation is bad because it forces users to make the choice, thus adding an obnoxious layer to using any website. I don't think the actual granular control given to users is a problem? As I say towards the end, I don't think we should force users to choose upon using a website/app, but only allow for more granular control of what data will be used in what feeds.

phdead

I am a young, bushy-eyed first-year PhD student. I imagine if you knew how much of a child of summer I was, you would sneer on sheer principle, and it would be justified. I have seen a lot of people expecting eternal summer, and this is why I predict a chilly fall. Not a full winter, but a slowdown as expectations come back to reality.

phdead

The point I was trying to make is not that there weren't fundamental advances in the past. There were decades of advances in fundamentals that rocketed development forward at an unsustainable pace. The effect of this can be seen in the sheer amount of computation being used for SOTA models. I don't foresee that same leap happening twice.

phdead

The summary is spot on! I would add that the compute overhang was not just due to scaling, but also due to 30 years of Moore's law and NVIDIA starting to optimize their GPUs for DL workloads.

The rep range idea was meant to communicate that, despite AlphaStar being a much smaller model than GPT, the training costs of the two were much closer than their sizes would suggest, due to the way AlphaStar was trained. Reading it now, it does seem confusing.

I meant progress in research innovations. You are right, though: from an application perspective, the plethora of low-hanging fruit will have a lot of positive effects on the world at large.

phdead

Out of curiosity, what is your reasoning behind believing that DL has enough momentum to reach AGI?

phdead

My thoughts for each question:

  1. Depending on context, there are a few ways I would communicate this. Take the phrase "We are quiet here." Said to prospective tenants at an apartment complex, it is "communicating group norms." Said to a friend who is talking during a funeral, it is "enforcing group norms." Telling yourself before you sleep that you will do this is "enforcing identity norms." You are sharing information, just local information about the group instead of global information about the world. All the examples given are information sharing.
  2. Believing in an opinion can be a group norm, and this can be useful or harmful. For example, "We believe victims" may not be bulletproof life advice, but groups with that norm are often more useful to survivors of sexual assault than groups which try to figure out whether they believe the story first.
  3. I think motivations are complex and impossible to fully vocalize. Often I don't realize why I really want or don't want to do something until after I've had the instinctual response. It's possible that narratives which obscure the full depth but feel true are good temporary stand-ins in these cases.
  4. Communication and enforcement of group and identity norms happens everywhere all the time, but increases the more status is associated with the group or identity.