michael_mjd

Awesome ideas! These are some of the pieces still missing for LLMs to have real economic impact. Companies expected them to simply automate entire jobs, but that's an all-or-nothing approach that has never worked historically (until it eventually does, but we're not there yet).

One idea I had while reading Scott Aaronson's post on his reading burden (https://scottaaronson.blog/?p=8217) is that people with interesting opinions and somewhat of a public presence have a TON of reading to do, not just to keep up with current events, but to observe people's reactions and see how ideas trend in response to events. Perhaps LLMs can help with this:

Give the model a collection of your writings and latest opinions. Have it scour online posts and their comments from your favorite sources. Each post plus its comment section is one input, so we need longer contexts. It looks for opportunities to share your viewpoint, and reports whether your viewpoint has already been shared or refuted, or whether there are points your writings haven't considered. If nothing, save yourself the effort! If something, it highlights the important bits.

This might be too many LLM calls depending on the sources, so a retrieval stage is obviously in order. Or that part can be done manually: we seem pretty good at finding handfuls of interesting-sounding articles, and we do this anyway while procrastinating.
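To make the loop concrete, here is a minimal Python sketch of what I have in mind. `call_llm` is a stand-in for whatever long-context chat API you prefer, and the verdict labels (ALREADY_SHARED, ALREADY_REFUTED, NEW_ANGLE, NOTHING) are just my guess at a reasonable taxonomy, not anything from Scott's post:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    worth_responding: bool
    summary: str

PROMPT_TEMPLATE = """You are helping an author triage their reading.

The author's writings and latest opinions:
{writings}

A post and its full comment section:
{thread}

Answer in two parts:
1. VERDICT: one of ALREADY_SHARED, ALREADY_REFUTED, NEW_ANGLE, NOTHING
2. HIGHLIGHTS: the few passages, if any, the author should actually read.
"""

def call_llm(prompt: str) -> str:
    """Stand-in for a long-context LLM API call; wire up your own."""
    raise NotImplementedError

def triage_thread(writings: str, thread: str) -> Verdict:
    """One post + comment section per call, as described above."""
    reply = call_llm(PROMPT_TEMPLATE.format(writings=writings, thread=thread))
    # Only surface threads where the model found something new to engage with.
    return Verdict(worth_responding="NEW_ANGLE" in reply, summary=reply)

def triage_feed(writings: str, threads: list[str]) -> list[Verdict]:
    # A retrieval/filtering stage belongs here to cut down on LLM calls;
    # this naive version scores every thread.
    verdicts = (triage_thread(writings, t) for t in threads)
    return [v for v in verdicts if v.worth_responding]
```

The one-thread-per-call structure is exactly where the cost piles up, so that's where the retrieval stage (or the manual skim) would pay off.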

I'll probably get disagree points, but I wanted to share my reaction: I honestly don't mind the AI's output. I read it all and think it's just an elaboration of what you said. The only problem I noticed is that it's too long.

Then again, I'm not an amazing writer, and I'm not well equipped to critique style. I will admit I rarely use assistance, because I have a tight set of points I want to include, and explaining them all to the AI is almost the same effort as writing the post itself.

Thanks for this post! I have always been annoyed that on Reddit, or even here, the response to poverty always comes back to "but poor people have cell phones!" It all comes down to freedom -- the number of meaningfully distinct actions a person can take in the world to accomplish their goals. If there are few real alternatives, and one's best options all involve working until exhaustion, that is not true freedom.

I agree, the poverty-restoring equilibrium is probably more complex than UBI alone can address -- maybe it's part of Moloch. I think rents rising by the UBI amount has something to do with demand inelasticity -- people will rent regardless of price -- so the price can climb until it hits the breaking point once again.

Nonetheless, UBI may still help. Also, I do think there are other concrete steps that can be taken. One cannot leave a horrible job for several reasons: (a) health insurance, (b) having a place to live, (c) having food, (d) school and giving children their best chance; but each of these can be tackled one by one. It may not solve the problem once and for all, but good public education (not funded by zip code), universal health insurance, and an adequate supply of housing are all steps toward relieving the bottlenecks one resource at a time.

The bottom line in my personal philosophy is this -- take direct action against those forces of poverty and Moloch. If there are unintended consequences, take direct action against those too. Propose policies, and try them out. Cynicism about any intervention working is really wishful thinking by the wealthy elites. It's not coordinated, as you say. They want to believe the systems we have are really the best we can do, because what we have makes them powerful. Acknowledging the possibility that there is a better way would be uncomfortable for them, both financially and psychologically!

Not my worst prediction, given the latest news!

That's fair. Here are some things to consider:

1 - I think 2017 was not that long ago. My hunch is that the low-level architecture of the network itself is not a bottleneck yet; I'd lean more on training procedures and algorithms. I'd count RLHF and MoE as significant developments, and those are even more recent.

2 - I give maybe a 30% chance of a stall, in the case that little commercial disruption comes of LLMs. I think there will still be enough research going on at the major labs, and even universities working at a smaller scale give a decent chance of efficiency gains and techniques the big labs can incorporate. Then again, if we agree they won't build the power plant, that is also my main mechanism for stalling the timeline by 10 years. The reason I only put 30% is that I expect multimodality and Aschenbrenner's "unhobblings" to buy the industry a couple more years of chances to find profit.

I think it is plausible, though not obvious, that large language models have a fundamental issue with reasoning. However, even if that's the case, I don't think it greatly impacts timelines. Here is my thinking:

I think timelines are fundamentally driven by scale and compute. We have a lot of smart people working on the problem, and there are a lot of obvious ways to address these limitations. Of course, given how research works, most of these ideas won't pan out, but I am skeptical that the needed paradigm shift is so counter-intuitive that nobody has even conceived of it yet. A delay of a couple of years is possible, perhaps if the current tech stack proves remarkably profitable and the funding goes directly into the current paradigm. But as compute becomes bigger and cheaper, it will only get easier to rapidly try new ideas and architectures.

I think our best path forward to delaying timelines is to not build gigawatt scale data centers.

Is there a post in the Sequences about when it is justifiable not to go down a rabbit hole? It's a fairly general question, but the specific context is a tale as old as time. My brother, who had been an atheist for decades, moved to Utah. After 10 years, he now asserts that he was wrong, and that his "rigorous pursuit" of verifying with logic and his own eyes leads him to believe the Bible is literally true. I worry about his mental health, so I don't want to debate him, but I felt I should give some kind of justification for why I'm not personally embarking on a Bible study. There's a potential subtext that, by not following his path, I either am not that rational or lack integrity. The subtext may not really be there, but I figure that if I can provide a well-thought-out response or summarize something from EY, it might make things feel more friendly, e.g. "I personally don't have enough evidence to justify spending the time on this, but I will keep an open mind if any new evidence comes up."

I would pay to see this live at a bar or one of those county fairs (we had a GLaDOS cover band once, so it's not out of the question).

If we don't get a song like that, take comfort that GLaDOS's songs from the Portal soundtrack are basically the same idea as the Sydney reference. Link: https://www.youtube.com/watch?v=dVVZaZ8yO6o

Let me know if I've missed something, but it seems to me the hard part is still defining harm. In the first case, where we use the model to calculate the probability of harm, if it has goals it may be incentivized to minimize that measured probability rather than the underlying harm. In the case where we have separate auxiliary models whose goal is to actively look for harm, we get a deceptively adversarial relationship between them: the optimizer can try to fool the harm-finding LLMs. In fact, in the latter case, I can imagine models that do a very good job of always finding some problem with a new approach, to the point where they become alarms that are largely ignored.

With his interpretability guidelines, plus humans sanity-checking all models within the system, I can see us minimizing the failure modes we already know about. But again, once the system gets sufficiently powerful, it may find something no human has thought of yet.
