Comments

Popular with Silicon Valley VCs 16 years later: just maximize the rate of entropy creation🤦🏻‍♂️

We have a winner! laserfiche's entry is the best (and only, but that doesn't mean it isn't high quality) submission, and they win $5K.

Code and demo will be posted soon.

Exactly. As for the cost issue, the code can be deployed as:

- Twitter bots (registered as such), so the deployer controls the cost.
- A webpage that charges a small payment (via crypto or credit card) to run 100 queries. Such websites can largely be generated by GPT-4, so it's an easy lift. Useful for people who genuinely want to learn, or who want good arguments for arguing online.
- A webpage with captchas and reasonable rate limits to keep costs small (see the sketch after this list).
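
For the last option, a minimal sketch of the rate-limiting part, assuming a Flask app; `answer_query` is a hypothetical stand-in for the actual model call, and the captcha check is omitted:

```python
# Minimal per-IP rate-limiting sketch (assumes Flask). answer_query() is a
# hypothetical stand-in for the real language-model call, and a captcha
# check would sit in front of the endpoint.
import time
from collections import defaultdict, deque

from flask import Flask, jsonify, request

app = Flask(__name__)

WINDOW_SECONDS = 3600  # one-hour window
MAX_QUERIES = 20       # per-IP budget, tuned to keep costs small
hits = defaultdict(deque)

def answer_query(question: str) -> str:
    # Placeholder; swap in the actual chat-completion API call.
    return f"(model answer to: {question})"

@app.post("/ask")
def ask():
    ip = request.remote_addr or "unknown"
    now = time.time()
    window = hits[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # forget requests older than the window
    if len(window) >= MAX_QUERIES:
        return jsonify(error="rate limit exceeded, try again later"), 429
    window.append(now)
    return jsonify(answer=answer_query(request.json["question"]))
```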

In general yes, here no. My impression from reading LW is that many people suffer from a great deal of analysis paralysis and are taking too few chances, especially given that the default isn't looking great.

There is such a thing as doing a dumb thing because it feels like doing something (e.g. "let's make AI Open!"), but this isn't that. The consequences of this project won't be huge (it's talking to people), but you'd get a nice little gradient read on how helpful it is and could iterate from there.

It should be possible to ask content owners for permission and get pretty far with that.

AFAIK what character.ai does is fine-tuning, with their own language models, which aren't at parity with ChatGPT. Using a better language model will yield better answers, but, MUCH MORE IMPORTANTLY, what I'm suggesting is NOT fine-tuning.

What I'm suggesting gives you an answer that's closer to a summary of relevant bits of LW, Arbital, etc. The failure mode is much more likely to be that the answer is irrelevant or off the mark than that it's at odds with prevalent viewpoints on this platform.

Think more interpolating over an FAQ, and less reproducing someone's cognition.
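
To make the shape of it concrete, here's a toy retrieval-then-answer sketch; the corpus snippets, `embed`, and `complete` are illustrative stand-ins, not any particular API:

```python
# Toy retrieval-then-answer sketch: embed the question, pull the nearest
# corpus chunks, and hand the model ONLY those chunks as context.
# The corpus text, embed(), and complete() are illustrative stand-ins.
import numpy as np

corpus = [
    "Excerpt from a LW post on instrumental convergence ...",
    "Excerpt from Arbital on the orthogonality thesis ...",
    "Excerpt from a LW post on corrigibility ...",
]

def embed(text: str) -> np.ndarray:
    # Stand-in embedding (letter frequencies) so the sketch runs end to
    # end; a real version would call an embeddings API.
    v = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - ord("a")] += 1
    n = np.linalg.norm(v)
    return v / n if n else v

def complete(prompt: str) -> str:
    # Stand-in for the chat-completion call.
    return f"(model answer, given a {len(prompt)}-char prompt)"

doc_vecs = np.stack([embed(c) for c in corpus])

def answer(question: str, k: int = 2) -> str:
    sims = doc_vecs @ embed(question)  # cosine similarity: vectors are unit-norm
    top = [corpus[i] for i in np.argsort(-sims)[:k]]
    prompt = ("Answer using only these excerpts:\n\n"
              + "\n---\n".join(top)
              + f"\n\nQuestion: {question}")
    return complete(prompt)

print(answer("What is instrumental convergence?"))
```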

> The US has around one traffic fatality per 100 million miles driven; if a human driver makes 100 decisions per mile

A human driver does not make 100 "life or death decisions" per mile. They make many more decisions than that, most of which, if wrong, can easily be corrected by a subsequent decision.

The statistic is misleading, though, in that it includes people who text, drunk drivers, and tired drivers. The performance of a well-rested human driver who's paying attention to the road is much, much higher than that. And that's really the bar that matters for a self-driving car: you don't want a car that merely does better than the average driver, who (hey, you never know) could be drunk.
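
For reference, the arithmetic behind the quoted figure, taken at face value:

```python
# The quoted figures, taken at face value: 1 fatality per 1e8 miles and
# 100 "decisions" per mile imply one fatality per 1e10 decisions.
miles_per_fatality = 100_000_000
decisions_per_mile = 100  # the disputed figure
print(f"{miles_per_fatality * decisions_per_mile:.0e} decisions per fatality")
# -> 1e+10 decisions per fatality
```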

Fixing hardware failures in software is literally how quantum computing is supposed to work: quantum error correction compensates for unreliable physical qubits through redundant encoding. It's clearly not a silly idea.
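
As a sketch of the underlying idea (a classical 3-bit repetition code, not actual quantum error correction, which is far subtler):

```python
# Classical analogue: a 3-bit repetition code. Encode one logical bit into
# three physical bits, flip each independently with probability p, then
# decode by majority vote. The principle (redundancy plus decoding in
# software) is the same one quantum error correction relies on.
import random

def encode(bit: int) -> list[int]:
    return [bit] * 3

def add_noise(bits: list[int], p: float) -> list[int]:
    return [b ^ (random.random() < p) for b in bits]

def decode(bits: list[int]) -> int:
    return int(sum(bits) >= 2)  # majority vote

p = 0.05           # physical error rate
trials = 100_000
errors = sum(decode(add_noise(encode(0), p)) != 0 for _ in range(trials))
print(errors / trials)  # logical error rate ~ 3*p**2 ≈ 0.007, well below p
```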

Generally speaking, there's a lot of appeal to intuition here, but I don't find it convincing. This isn't good for Tokyo property prices? Well, maybe, but how good a heuristic is that when Mechagodzilla is on its way regardless?

In addition:

  1. There aren't that many actors in the lead.
  2. Simple but key insights in AI (e.g. doing backprop, using sensible weight initialisation) have been missed for decades.

If the right tail of the time-to-AGI distribution for a single group can be long, and there aren't that many groups, then convincing one group to slow down or pay more attention to safety can have big effects.

How big an effect? Years don't seem off the table. Eliezer dismissively suggests six months. But add a couple of years here and a couple of years there, and pretty soon you're talking about the possibility of real progress. Of course it's of little use if no alignment research is attempted in that period, but it's not nothing.
