cousin_it

https://vladimirslepnev.me


"Where there's muck, there's brass" comes to mind.


I think messing with scrolling is one of those things that isn't a good idea even when you think it is.

I don't think there will be an age of abundance :-(

As for unemployment, it feels a bit weird that 1) everyone I know outside FAANG, including me, feels that finding a job has become much harder, while 2) the statistics say today's unemployment rate is kinda low and unremarkable.

Economics says export orientation is, to first order, better than import substitution. Ricardian comparative advantage and all that. As for the rationalist community, I've been convinced for years that it took a major loss by doing AI research in isolation (and sometimes even in secrecy), which ended up going nowhere.
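
To make the comparative-advantage point concrete, here's a minimal Ricardian sketch in Python. Everything in it is hypothetical illustration: the countries, goods, labor costs, and trade quantity are made up, with the price simply picked from the range between the two opportunity costs where both sides gain.

```python
# Toy two-country, two-good Ricardian model: labor is the only input.
hours_per_unit = {
    "home":    {"cloth": 2.0, "wine": 4.0},
    "foreign": {"cloth": 6.0, "wine": 3.0},
}
LABOR = 12.0  # hours of labor available to each country

def autarky(country):
    """Import substitution taken to the limit: no trade, labor split evenly."""
    costs = hours_per_unit[country]
    return {good: (LABOR / 2) / cost for good, cost in costs.items()}

def specialize_and_trade(cloth_traded=1.75, wine_per_cloth=1.0):
    """Home specializes in cloth (opportunity cost 0.5 wine vs. foreign's 2),
    foreign in wine; they trade at a price between those opportunity costs."""
    home_cloth = LABOR / hours_per_unit["home"]["cloth"]      # 6 units of cloth
    foreign_wine = LABOR / hours_per_unit["foreign"]["wine"]  # 4 units of wine
    wine_traded = cloth_traded * wine_per_cloth
    home = {"cloth": home_cloth - cloth_traded, "wine": wine_traded}
    foreign = {"cloth": cloth_traded, "wine": foreign_wine - wine_traded}
    return home, foreign

print("autarky:", autarky("home"), autarky("foreign"))
print("trade:  ", *specialize_and_trade())
# Both countries end up strictly better off in both goods:
# home 4.25 cloth / 1.75 wine vs. 3 / 1.5; foreign 1.75 / 2.25 vs. 1 / 2.
```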

Indeed, there's a natural dichotomy between import-replacement-focused groups, like cults or isolationist states, and export-focused groups, like firms or exporter states. But I'm not sure import replacement is the right direction. I think the export-oriented strategy works about as well in terms of self-government, brings more money and success, and leads to a kind of healthy openness to the world.

Yeah. I stumbled upon a similar idea a decade ago and it pretty much changed my life. When feeling something, just feel it, lean into it instead of away from it. A small discussion here.


I think it's a good direction to move in. But I usually don't think of it as "trying to become a wizard" or some kind of self-improvement. When I do something, it's because I'm interested in the thing. Like making a video game because I had an idea for it, or reading an economics textbook because I was curious about economic questions. The challenge is maintaining a steady flow of such projects; I've found that in "steady state" I do about one per year, which isn't a lot. So maybe ambition would actually help? Idk.

Yeah, I had similar thoughts. And it's even funnier: the AI will not just refuse to solve these problems, but will also stop us from creating other AIs to solve them.


My perspective (well, the one that came to me during this conversation) is indeed "I don't want to take cocaine -> human-level RL is not the full story": that our attachment to real-world outcomes and reluctance to wirehead is due to evolution-level RL, not human-level. So I'm not quite saying all plans will fail; but I am saying that plans relying only on RL within the agent itself will have wireheading as an attractor, and it might be better to look at other plans.

It's just awfully delicate. If the agent is really dumb, it will enjoy watching videos of the button being pressed (after all, they cause the same sensory experiences as watching the actual button being pressed). Make the agent a bit smarter, because we want it to be useful, and it'll begin to care about the actual button being pressed. But add another increment of smart, overshoot just a little bit, and it'll start to realize that behind the button there's a wire, and the wire leads to the agent's own reward circuit and so on.
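
A deliberately tiny toy to make the overshoot concrete (my own construction, not something from this thread): an agent that just argmaxes predicted reward under its world model. The actions and numbers are all made up; the point is only how the chosen action slides from videos to the real button to the wire as the model's fidelity increases.

```python
# Predicted reward for each available action, under world models of
# increasing fidelity.
world_models = {
    # Can't distinguish video from reality; videos are easier to obtain.
    "dumb":      {"watch_video": 1.0, "press_button": 0.9},
    # Tracks the actual button, so videos no longer predict reward.
    "smarter":   {"watch_video": 0.0, "press_button": 1.0},
    # Sees the wire into its own reward circuit; the overshoot case.
    "too smart": {"watch_video": 0.0, "press_button": 1.0, "rewire": 100.0},
}

for name, predicted_reward in world_models.items():
    best = max(predicted_reward, key=predicted_reward.get)
    print(f"{name:9} agent chooses: {best}")
```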

Can you engineer things just right, so the agent learns to care about just the right level of "realness"? I don't know, but I think in our case evolution took a different path. It did a bunch of learning by itself, and saddled us with the result: "you'll care about reality in this specific way". So maybe when we build artificial agents, we should also do a bunch of learning outside the agent to capture the "realness"? That's the point I was trying to make a couple comments ago, but maybe didn't phrase it well.
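
Here's the simplest sketch of what "learning outside the agent" could mean, again entirely my own toy construction rather than a worked-out proposal: an outer loop selects candidate policies by an outcome measured in the world, while the reward signal the agent itself experiences never enters the selection, so reward tampering earns nothing.

```python
import random

random.seed(0)  # reproducibility

ACTIONS = ["watch_video", "press_button", "rewire_reward"]

def internal_reward(action):
    """What the agent's own reward channel reports; tampering maxes it out."""
    return {"watch_video": 1.0, "press_button": 1.0, "rewire_reward": 100.0}[action]

def external_fitness(action):
    """What the outer loop measures: was the actual button actually pressed?"""
    return 1.0 if action == "press_button" else 0.0

def evolve(generations=100, pop_size=20, mutation_rate=0.05):
    """Select 'policies' (here just fixed actions) by external fitness only."""
    population = [random.choice(ACTIONS) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=external_fitness, reverse=True)[: pop_size // 2]
        population = [random.choice(ACTIONS) if random.random() < mutation_rate
                      else random.choice(parents)
                      for _ in range(pop_size)]
    return max(set(population), key=population.count)

print("greedy on internal reward:", max(ACTIONS, key=internal_reward))  # rewire_reward
print("selected by outer loop:  ", evolve())                            # press_button
```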

Maybe the level of individual conscious people is already too low a level.
