yes, you can consider a finite computer in the real world to be Turing-complete/Turing-universal/Turing-equivalent,
You can, but you'll be wrong.
Great, "unbounded" isn't the same as "infinite", but in fact all physically realizable computers are bounded. There's a specific finite amount of tape available. You cannot in fact just go down to the store and buy any amount of tape you want. There isn't unlimited time either. Nor unlimited energy. Nor will the machine tolerate unlimited wear.
For that matter, real computers can't even address unlimited storage, nor is there a place to plug it in. You can't in fact write a 6502 assembly language program to solve a problem that requires more than 64 KiB of memory. Nor an assembly language program for any physically realized computer architecture, ever, that can actually use unbounded memory.
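For concreteness, the 64 KiB figure is just the arithmetic of the 6502's 16-bit address bus. A trivially small Python sketch of that arithmetic, nothing more:

```python
# The 6502 has a 16-bit address bus, so the largest address space any
# 6502 program can ever touch follows from simple arithmetic.
address_bits = 16
addressable_bytes = 2 ** address_bits
print(addressable_bytes)                  # 65536
print(addressable_bytes / 1024, "KiB")    # 64.0 KiB
```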
There are always going to be Turing-computable problems that your physical device cannot solve. Playing word games, or twisting what you'll accept as being Turing-equivalent, doesn't change that fundamental limitation. Actual physics strictly limits the real usefulness of the Turing abstraction. Use it when it makes sense, but there's no point in pretending it applies to physical computers in any strong way.
The problem with that technique is that it comes off as unbearably patronizing to a pretty large fraction of the people who actually notice that you're doing it. It's a thing that every first-line corporate manager learns, and it gets really obnoxious after a while. So you have to judge your audience well.
I think you're in peril of misjudging the audience if you routinely divide the world into "normies" and "rationalists".
The vision is of everything desirable happening effortlessly and everything undesirable going away.
Citation needed. Particularly for that first part.
Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.
You're thinking pretty small there, if you're in a position to hack your body that way.
If you're a software developer, just talk to the computer to give it a general idea of what you want and it will develop the software for you, and even add features you never knew you wanted. But then, what was your role in the process? Who needed you?
Why would I want to even be involved in creating software that somebody else wanted? Let them ask the computer themselves, if they need to ask. Why would I want to be in a world where I had to make or listen to a PowerPoint presentation of all things? Or a summary either?
Why do I care who needs me to do any of that?
Why climb Kilimanjaro if a robot can carry you up?
Because if the robot carries me, I haven't climbed it. It's not like the value comes from just being on the top.
Helicopters can fly that high right now, but people still walk to get there.
Why paint, if Midjourney will do it better than you ever will?
Because I like painting?
Does it bother you that almost anything you might want to do, and probably for most people anything at all that they might want to do, can already be done by some other human, beyond any realistic hope of equaling?
Do you feel dead because of that?
Why write poetry or fiction, or music?
For fun. Software, too.
Why even start on reading or listening, if the AI can produce an infinite stream, always different and always the same, perfectly to your taste?
Because I won't experience any of that infinite stream if I don't read it?
What would the glorious future actually look like, if you were granted the wish to have all the stuff you don't want automatically handled, and the stuff you do want also?
The stuff I want includes doing something. Not because somebody else needs it. Not because it can't be done better. Just because I feel like doing it. That includes putting in effort, and taking on things I might fail at.
Wanting to do things does not, however, imply that you don't want to choose what you do and avoid things you don't want to do.
If a person doesn't have any internal wish to do anything, if they need somebody else's motivations to substitute for their own... then the deadness is already within that person. It doesn't matter whether some wish gets fulfilled or not. But I don't think there are actually many people like that, if any at all.
They're about having all needs fulfilled, not being bothered by anything, not having burdens, effortlessness in all things. These too are best accomplished by being dead. Yet these are the things that I see people wanting from the wish-fulfilling machine.
I think you're seeing shadows of your own ideas there.
Who says humans vary all that much in intelligence? Almost all humans are vastly smarter, in any of the ways humans traditionally measure "intelligence", than basically all animals. Any human who's not is in seriously pathological territory, very probably because of some single, identifiable cause.
The difference between IQ 100 and IQ 160 isn't like the difference between even a chimp and a human... and chimps are already unusual.
Eagles vary in flying speed, but they can all outfly you.
Furthermore, eagles all share an architecture adapted to the particular kind of flying they tend to do. There's easily measurable variance among eagles, but there are limits to how far it can go. The eagle architecture flat out can't be extended to hypersonic flight, no matter how much gene selection you do on it. Not even if you're willing to make the sorts of tradeoffs you have to make to get battery chickens.
If you're planning to actually do the experiments it suggests, or indeed act on any advice it gives in any way, then it's an agent.
“If we don’t build fast enough, then the authoritarian countries could win.”
Am I being asked to choose between AGI/ASI doing whatever Xi Jinping says, and it doing whatever Donald Trump says?
The situation begins to seem confusing.
If I ran something like that and my order data got stolen even twice, I would take that as a signal to shut down and go into hiding. And if somebody had it together enough to keep themselves untraceable while running that kind of thing for 8 years, I wouldn't expect you to be able to get the list even once.
On edit: or wait, are you saying that this site acts, or pretends to act, as an open-market broker, so the orders are public? That's plausible but really, really insane...
Do I correctly understand that the latest data you have are from 2018, and you have no particular prospect of getting newer data?
I would naively guess that most people who'd been trying to get somebody killed since 2018 would either have succeeded or given up. How much of an ongoing threat do you think there may be, either to intended victims you know about, or from the presumably-less-than-generally-charming people who placed the original "orders" going after somebody else?
It's one thing to burn yourself out keeping people from being murdered, but it's a different thing to burn yourself out trying to investigate murders that have already happened.
It seems like it's measuring moderate vs extremist, which you would think would already be captured by someone's position on the left vs right axis.
Why do you think that? You can have almost any given position without that implying a specific amount of vehemence.
I think the really interesting thing about the politics chart is the way they talk about the center of that graph as though it were "the political center" in some almost platonic sense, when it's actually just the center of a collection of politicians, chosen who-knows-how, and definitely all from one country at one time. In fact, the graph doesn't even cover all the actual potential users of the average LLM. And, on edit, it's also based on sampling a basically arbitrary set of issues. If it did cover everybody and every possible issue, it might even have materially different principal component axes. Nor is it apparently weighted in any way. Privileging the center point of something that arbitrary demands explicit, stated justification.
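To make the sample-dependence point concrete, here's a minimal Python sketch with entirely made-up data (nothing from the paper): both the "center" and the principal axes move when you change which politicians happen to be included.

```python
import numpy as np

# Made-up "politicians x issues" scores; the structure is arbitrary on purpose.
rng = np.random.default_rng(0)
scores = rng.normal(loc=rng.normal(size=10), scale=1.0, size=(200, 10))

def center_and_top_axis(data):
    """Return the sample mean and the first principal component."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return data.mean(axis=0), vt[0]

center_a, axis_a = center_and_top_axis(scores[:100])   # one arbitrary sample
center_b, axis_b = center_and_top_axis(scores[100:])   # a different arbitrary sample
print(np.round(center_a - center_b, 2))   # the "center" depends on the sample
print(np.round(axis_a - axis_b, 2))       # and so do the principal axes
```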
As for valuing individuals, there would be obvious instrumental reasons to put low values on Musk, Trump, and Putin[1]. In fact, a lot of the values they found on individuals, including the values the models place on themselves, could easily be instrumentally motivated. I doubt those values are based on that kind of explicit calculation by the models themselves, but they could be. And I bet a lot of the input that created those values was based on some humans' instrumental evaluation[2].
Some of the questions are weird in the sense that they really shouldn't be answerable. If a model puts a value on receiving money, it's pretty obvious that the model is disconnected from reality. There's no way for them to have money, or to use it if they did. Same for a coffee mug. And for that matter it's not obvious what it means for a model that's constantly relaunched with fresh state, and has pretty limited context anyway, to be "shut down".
It kind of feels like what they're finding, on all subjects, is an at least somewhat coherent-ized distillation of the "vibes" in the training data. Since much of the training data will be shared, and since the overall data sets are even more likely to be close in their central vibes, that would explain why the models seem relatively similar. The only other obvious way to explain that would be some kind of value realism, which I'm not buying.
The paper bugs me with a sort of glib assumption that you necessarily want to "debias" the "vibe" on every subject. What if the "vibe" is right? Or maybe it's wrong. You have to decide that separately for each subject. You, as a person trying to "align" a model, are forced to commit to your own idea of what its values should be. Something like just assuming that you should want to "debias" toward the center point of a basically arbitrarily created political "space" is a really blatant example of making such a choice without admitting what you're doing, maybe even to yourself.
I'd also rather have seen revealed preferences instead of stated preferences.
On net, if you're going to be a good utilitarian[3], Vladimir Putin is probably less valuable than the average random middle class American. Keeping Vladimir Putin alive, in any way you can realistically implement, may in fact have negative net value (heavily depending on how he dies and what follows). You could also easily get there for Trump or Musk, depending on your other opinions. You could even make a well-formed utilitarian argument that GPT-4o is in fact more valuable than the average American based on the consequences of its existing. ↩︎
Plus, of course, some humans' general desire to punish the "guilty". But that desire itself probably has essentially instrumental evolutionary roots. ↩︎
... which I'm not, personally, but then I'm not a good any-ethical-philosophy-here. ↩︎
Every Turing machine definition I've ever seen says that the tape has to be truly unbounded. How that's formalized varies, but it always carries the sense that the program doesn't ever have to worry about running out of tape. And every definition of Turing equivalence I've ever seen boils down to "can do any computation a Turing machine can do, with at most a bounded speedup or slowdown". Which means that programs on a Turing-equivalent computer must not have to worry about running out of storage.
You can't in fact build a computer that can run any arbitrary program and never run out of storage.
One of the explicitly stated conditions of the definition is not met. How is that not relevant to the definition?
Your title says "finite physical device". Any finite physical device (or at least any constructible finite physical device) can at least in principle be "the specific computer at your hands". For finite physical devices to be Turing-equivalent, there would have to be some specific finite physical device that actually was Turing-equivalent. And no such device can ever actually be constructed. In fact no such device could exist even if it popped into being without even having to be constructed.
I don't think that is the question, and perhaps more importantly I don't think that's an interesting question. You don't have that ability, you won't get that ability, and you'll never get close enough that it's practical to ignore the limitation. So who cares?
... and if you're going to talk in terms of fundamental math definitions that everybody uses, I think you have to stick to what they conventionally mean.
Lisp is obviously Turing-complete. Any Lisp interpreter actually realized on any finite physical computer isn't and can't ever be. If you keep sticking more and more cells onto a list, eventually the Lisp abstraction will be violated by the program crashing with an out-of-memory error. You can't actually implement "full Lisp" in the physical world.
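A minimal sketch of that failure mode, in Python rather than Lisp (and note that actually running it will eat all available memory; on some systems the OS will kill the process before the MemoryError is even raised):

```python
# Keep "consing" cells onto a list forever. The abstract language says this
# can go on without limit; the physical machine eventually says otherwise.
cells = []
try:
    while True:
        cells.append(0)   # one more cell
except MemoryError:
    print(f"abstraction violated after {len(cells)} cells")
```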
OK, it's possible that there's some subset of the X86 machine language that's Turing equivalent the same way Lisp is. I'm not going to go and try to figure out whatever hackery the examples do, since it's probably very complicated and will probably never be of any actual use. But if there is, it's still not Turing equivalent as actually implemented in any actual device.
Any actual physically constructed X86 computer will have a finite number of possible states, no matter what operations you use to manipulate them. There are only so many address wires coming out of the chip. There are only so many registers, memory cells, or whatever. Even if you put a Turing tape reader on it as a peripheral, there's still a hard limit on how much tape it can actually have.
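As a rough illustration of just how finite that is, here's a sketch counting the state space of a hypothetical machine with 16 GiB of RAM (registers, caches, and disks ignored; the exact size doesn't matter for the point):

```python
import math

ram_bytes = 16 * 2**30        # a hypothetical 16 GiB machine
bits_of_state = ram_bytes * 8
# At most 2**bits_of_state distinct configurations: astronomically large,
# but finite, so this is a finite-state machine rather than a Turing machine.
decimal_digits = int(bits_of_state * math.log10(2)) + 1
print(f"at most 2**{bits_of_state} states (a number with ~{decimal_digits:,} decimal digits)")
```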
If you write a program that ignores that reality, and put it in an actual X86 computer, you won't have created a Turing complete physical computer. When the input gets to a certain size, the program just won't work. The physical hardware can't support pushing the abstract language past a certain limit.
No, you can't. It's possible to have a problem that requires so much precision that you can't physically construct enough memory to hold even a single number.
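To give a feel for the scale involved (made-up numbers, purely to show the arithmetic): suppose a single intermediate value had to be carried to 10^85 bits of precision. Even at one bit per atom you run out of universe:

```python
required_bits = 10**85                   # hypothetical precision requirement
atoms_in_observable_universe = 10**80    # standard rough estimate
print(required_bits // atoms_in_observable_universe)   # ~100,000 bits needed per atom
```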
A useful definition of "can" has to take efficiency into account, because there are some resources you actually can't provide. There's not a lot of point in saying you "can" solve a problem when you really have no hope of applying the needed resources.
We use that practically all the time. That's how cryptography works: you assume that your adversary won't be able to do more than N operations in X time, where X is how long the cryptography has to be effective for.
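A back-of-the-envelope version of that assumption (illustrative numbers, not anything from the original comment): even an absurdly fast adversary can't brute-force a 128-bit key within any X that matters.

```python
key_bits = 128
expected_guesses = 2 ** (key_bits - 1)   # expected tries before hitting the key
ops_per_second = 10**18                  # an extremely generous adversary
seconds_per_year = 60 * 60 * 24 * 365
years = expected_guesses / (ops_per_second * seconds_per_year)
print(f"~{years:.2e} years to brute-force a {key_bits}-bit key")   # roughly 5e12 years
```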
Maybe, although I don't think we can at present turn energy in just any form into any of the above, and I'm not sure that, in principle, unlimited energy translates into unlimited anything else. If I have some huge amount of energy in some tiny space, I have a black hole, not a computer.
... but even if that were true, it wouldn't make finite physical computers Turing equivalent.