AnthonyC

Comments

I agree that consciousness arises from normal physics and biology, with nothing extra needed, even if I don't yet know how. I expect that we will, in time, be able to work out the mechanistic explanation of the how. But right now, this model very effectively solves the Easy Problem, while essentially declaring the Hard Problem unimportant. The question, "Yes, but why that particular qualia-laden engineered solution?", is still there, unexplained and ignored. I'm not even saying that's a tactical mistake! Sometimes ignoring a problem we're not yet equipped to address is the best way to make progress toward getting the tools to eventually address it. What I am saying is that calling this a "debunking" is misdirection.

I've read this story before, including, originally, here on LW, but for some reason this time it got me thinking: I've never seen a discussion of what this tradition meant for early Christianity, before the Christians decided to just declare (supposedly after God sent Peter a vision, an argument that only works by assuming the conclusion) that the old laws no longer applied to them. After all, the Rabbi Yeshua ben Joseph (as the Gospels sometimes call him) explicitly declared the miracles he performed to be a necessary reason why not believing in him was a sin.

We apply different standards of behavior to different types of choices all the time (in terms of how much effort to put into the decision process), mostly successfully. So I read this reply as asking something like, "Which category of 'How high a standard should I use?' do you put 'Should I lie right now?' in?"

A good starting point might be: one rank higher than you would demand for not lying; see how it goes and adjust over time. If I tried to make an effort ranking of all the kinds of tasks I regularly engage in, I expect there would be natural clusters I could roughly draw an axis through. E.g., I put more effort into client-facing or boss-facing tasks at work than into casual conversations with random strangers. I put more effort into setting the table, washing dishes, and plating food for holidays than for a random Tuesday. Those are probably more than one rank apart, but for any given situation, I think the bar for lying should be somewhere in the vicinity of a gap of that size.
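As a toy sketch of what I mean (the tiers and example tasks below are made up purely for illustration, not a real scale):

```python
# Toy sketch of the "one rank higher" heuristic.
# The tiers and example tasks are made up purely for illustration.

EFFORT_TIERS = {
    "casual conversation with a stranger": 1,
    "random Tuesday dinner at home": 1,
    "holiday table-setting and plating": 2,
    "client-facing or boss-facing work": 3,
}

def bar_for_lying(situation: str) -> int:
    """Hold a potential lie to roughly one effort tier above the standard
    you would already apply to the situation itself."""
    return EFFORT_TIERS.get(situation, 1) + 1

for situation, tier in EFFORT_TIERS.items():
    print(f"{situation}: usual tier {tier}, bar for lying ~{bar_for_lying(situation)}")
```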

One of the factors to consider, in contrast with old-fashioned hostage exchanges as described, is that you would never allow your nation's leaders to visit any city you knew had such an arrangement. Not as a group, and probably not individually. You could never justify making this kind of agreement for Washington DC or Beijing or Moscow, the way you can justify, "We both have missiles that can hit anywhere, including your capital city." The traditional approach is to make yourself vulnerable enough to credibly signal unwillingness to betray one another, but only vulnerable enough that there is still a price at which you would make the sacrifice.

Also, compared to the MAD strategy of keeping launchable missiles, this strategy selectively disincentivizes people from moving to whichever cities are the subject of such agreements, which are probably your most productive and important cities.

It’s a subtle thing. I don’t know if I can eyeball two inches of height.

Not from a picture, but IRL, if you're 5'11" and they claim 6'0", you can. If you're 5'4", probably not so much. Which is good, in a sense, since the practical impact of this brand of lying on someone who is 5'4" is very small, whereas unusually tall women may care whether their partner is taller or shorter than they are. 

This makes me wonder what the pattern looks like for gay men, and whether their reactions to it and feelings about it differ from those of straight women.

Lie by default whenever you think it passes an Expected Value Calculation to do so, just as for any other action. 

How do you propose to approximately carry out such a process, and how much effort do you put into pretending to do the calculation?

I'm not as much of a stickler/purist/believer in honesty-as-always-good as many around here; I think there are many times when deception of some sort is a valid, good, or even morally required choice. I definitely think e.g. Kant was wrong about honesty as a maxim, even within his own framework. But in practice, I think your proposed policy sets much too low a standard, and the gap between what you proposed and "Lie by default whenever it passes an Expected Value Calculation to do so, just as for any other action" is enormous, both in theoretical defensibility and in the skillfulness (and internal levels of honesty and self-awareness) required to successfully execute it.
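To illustrate the kind of gap I mean, here is a minimal sketch of an expected-value calculation for a lie; every number in it is invented, and the entire difficulty lives in where those estimates come from:

```python
# Minimal sketch of an expected-value calculation for a lie. Every number here
# is invented; the real difficulty is that these estimates are exactly where
# motivated reasoning creeps in.

def lie_expected_value(p_believed: float, gain_if_believed: float,
                       loss_if_caught: float, trust_cost: float) -> float:
    """EV = p(believed) * gain - p(caught) * loss - ongoing cost to trust and self-honesty."""
    p_caught = 1.0 - p_believed
    return p_believed * gain_if_believed - p_caught * loss_if_caught - trust_cost

# "You think it passes": optimistic, self-serving estimates.
naive = lie_expected_value(p_believed=0.9, gain_if_believed=10, loss_if_caught=20, trust_cost=1)

# "It actually passes": the same lie under less flattering estimates.
sober = lie_expected_value(p_believed=0.6, gain_if_believed=10, loss_if_caught=40, trust_cost=5)

print(f"naive EV: {naive:+.1f}, sober EV: {sober:+.1f}")  # +6.0 vs -15.0
```

The same lie flips from positive to negative expected value depending only on whose estimates you trust, which is why "whenever you think it passes" and "whenever it passes" are such different policies.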

I personally wouldn't want to do a PhD that didn't achieve this!


Agreed. That was somewhere around reason #4 for why I quit my PhD program as soon as I qualified for a master's in passing.

Any such question has to account for uncertainty about what US trade policies and tariffs will be tomorrow, let alone by the time anyone currently planning a data center actually finishes building it.

Also, when you say offshore, do you mean in other countries, or actually in the ocean? Assuming the former, I think that would mean use of the data center by anyone in the US would be an import of services. If this started happening at scale, I would expect the current administration to immediately begin applying tariffs to those services.

@Garrett Baker Yes, electronics are exempt (for now?), but IIUC all the other stuff (HVAC, electrical, etc.) that goes into a data center is not, and that's often a majority, or at least a high proportion, of total costs.
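As rough, made-up arithmetic for why the exemption may not help much (the cost split and tariff rate below are assumptions, not actual figures):

```python
# Rough, made-up arithmetic: if electronics are tariff-exempt but the rest of a
# data center build (HVAC, electrical, structure, etc.) is not, overall cost
# still rises noticeably. The cost split and tariff rate are assumptions.

electronics_share = 0.45               # assumed exempt share of total build cost
other_share = 1.0 - electronics_share  # HVAC, electrical, construction, etc.
tariff_rate = 0.25                     # assumed tariff on the non-exempt portion

cost_multiplier = electronics_share + other_share * (1 + tariff_rate)
print(f"Total cost multiplier: {cost_multiplier:.3f} "
      f"(~{(cost_multiplier - 1) * 100:.0f}% increase despite the exemption)")
```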

Do you really expect that the project would then fail at the "getting funded"/"hiring personnel" stages?

Not at all; I'd expect them to get funded and to hire people. Plausibly quite well, or at least I hope so!

But when I think about paths by which such a company shapes how we reach AGI, I find it hard to see how that happens unless something (regulation, hitting walls in R&D, etc.) either slows the incumbents down or else causes them to adopt the new methods themselves. Both of which are possible! I'd just hope anyone seriously considering pursuing such a venture has thought through what success actually looks like. 

"Independently develop AGI through different methods before the big labs get there through current methods" is a very heavy lift that's downstream of but otherwise almost unrelated to "Could this proposal work if pursued and developed enough?" 

I think, "Get far enough fast enough to show it can work, show it would be safer, and show it would only lead to modest delays, then find points of leverage to get the leaders in capabilities to use it, maybe by getting acquired at seed or series A" is a strategy not enough companies go for (probably because VCs don't think it's as good for their returns).

  1. You're right, but creating unexpected new knowledge is not a PhD requirement. I expect it's pretty rare that a PhD student achieves that level of research.
  2. It wasn't a great explanation, sorry, and there are definitely some leaps, digressions, and hand-wavy bits. But basically: even if current AI research were all blind mutation and selection, we already know that that can yield general intelligence from animal-level intelligence, because evolution did it. And we already have various examples of how human research can apply much greater random and non-random mutation, larger individual changes, higher selection pressure in a preferred direction, and more horizontal transfer of traits than evolution can, enabling (very roughly estimated) ~3-5 OOMs greater progress per generation with fewer individuals and shorter generation times (see the rough sketch after this list).
  3. Saw your edit above, thanks.
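Very roughly, here is one way such an estimate could be decomposed; the individual factor values are invented placeholders, and only the overall ~3-5 OOM range comes from my point 2 above:

```python
# Illustrative decomposition of the "~3-5 OOMs more progress per generation"
# guess. Each factor below is an invented placeholder, not a measurement; the
# point is only that several modest multipliers compound into orders of magnitude.
import math

factors = {
    "higher, directed mutation rate": 10,                 # assumed
    "larger individual changes per step": 10,             # assumed
    "stronger selection toward a chosen objective": 10,   # assumed
    "more horizontal transfer of useful traits": 5,        # assumed
}

total = math.prod(factors.values())
print(f"Combined multiplier: {total:,} (~{math.log10(total):.1f} OOMs per generation)")
# -> 5,000 (~3.7 OOMs), inside the rough 3-5 OOM range guessed above
```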