Goodness maximizing as undefined without an arbitrary choice of values
By "(non-socially-constructed) Goodness" I mean the goodness of a state of affairs as it actually seems to that particular person really-deep-down. Which can have both selfish -- perhaps "arbitrary" from a certain perspective -- and non-selfish components.
I think the basic reason that it's hard to make an interesting QCA using this definition is that it's hard to make a reversible CA. Reversible cellular automata are typically made using block-partitioning or a second-order method. The (classical) laws of physics also seem to have a flavor more similar to these than to a GoL-style CA, in that they have independent position and velocity coordinates which each determine the time evolution of the other.
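To make the second-order construction concrete, here's a minimal sketch (assuming a 1D binary lattice, periodic boundaries, and an arbitrary parity rule; none of these specifics come from the comment above). The next state is a local rule applied to the current state, XORed with the previous state, which makes the dynamics exactly invertible:

```python
# Minimal sketch of a second-order reversible CA: 1D binary lattice,
# periodic boundaries, arbitrary parity rule (all illustrative choices).
import numpy as np

def local_rule(neigh):
    """Any function of the current-time neighborhood works; parity is the simplest."""
    return neigh.sum(axis=0) % 2

def step(prev, curr):
    """Second-order update: next = rule(neighborhood of curr) XOR prev."""
    neigh = np.stack([np.roll(curr, -1), curr, np.roll(curr, 1)])
    return local_rule(neigh) ^ prev

def step_back(nxt, curr):
    """Exact inverse: prev = rule(neighborhood of curr) XOR next."""
    neigh = np.stack([np.roll(curr, -1), curr, np.roll(curr, 1)])
    return local_rule(neigh) ^ nxt

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 64)   # state at t-1
b = rng.integers(0, 2, 64)   # state at t
c = step(a, b)               # state at t+1
assert np.array_equal(step_back(c, b), a)  # the dynamics are exactly invertible
```

Note the state at one time step isn't enough to run it backwards; you need two consecutive time slices, which is what gives it the position-and-velocity flavor mentioned above.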
Yeah I definitely agree you should start learning as young as possible. I think I would usually advise a young person starting out to learn general math/CS stuff and do AI safety on the side, since there's way more high-quality knowledge in those fields. Although "just dive in to AI" seems to have worked out well for some people like Chris Olah, and timelines are plausibly pretty short so ¯\_(ツ)_/¯
People asked for a citation so here's one: https://www.kellogg.northwestern.edu/faculty/jones-ben/htm/age%20and%20scientific%20genius.pdf
Although my belief was more based on anecdotal knowledge of the history of science. Looking up people at random: Einstein's annus mirabilis was at 26; Cantor invented set theory at 29; Hamilton discovered Hamiltonian mechanics at 28; Newt...
but it seems that even on LW people think winning on a noisy N=1 sample is proof of rationality
It's not proof of a high degree of rationality, but it is evidence against being an "idiot" as you said. Especially since the election isn't merely a binary yes/no outcome: we can observe that there was a huge Republican blowout exceeding most forecasts (and in fact Freddi bet a lot on the Republican popular vote too, at worse odds, as well as on some random states, which gives a larger update). This should increase our credence that predicting a Republican win was rational....
Looks likely that tonight is going to be a massive transfer of wealth from "sharps" (among other people) to him. Post hoc and all, but I think if somebody is raking in huge wins while making "stupid" decisions, it's worth considering whether they're actually so stupid after all.
>> 'a massive transfer of wealth from "sharps" '.
No. That's exactly the point.
1. There might not be any real sharps (= traders having access to real private, arbitrageable information who are consistently taking risk-neutral bets on it) in this market at all.
This is because a) this might simply be a noisy, high-entropy source that is inherently difficult to predict, hence there is little arbitrageable information, and/or b) sharps have not been sufficiently incentivized.
2. The transfer of wealth is actually disappointing because Theo th...
That's why I said: "In expectation", "win or lose"
That the coinflip came out one way rather than another doesn't prove the guy had actual inside knowledge. He bought a large part of the shares at crazy odds because his market impact moved the price so much.
But yes, he could be a sharp in sheep's clothing. I doubt it, but who knows. EDIT: I calculated the implied private odds that this guy would have to have as a rational Kelly bettor. Suffice to say these private odds seem unrealistic for election betting.
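For concreteness, here's one way such an implied-odds calculation could go. The comment doesn't give the actual numbers, so the bet fraction and market price below are purely hypothetical:

```python
# Hypothetical sketch of an implied-private-odds calculation for a Kelly bettor.
# The fraction and price are made up purely for illustration.

def implied_private_probability(fraction_of_bankroll: float, price: float) -> float:
    """
    For a binary contract priced at `price` (pays 1 if YES), full Kelly says
    stake f* = (p - price) / (1 - price) of your bankroll, where p is your
    private probability. Inverting: p = price + f * (1 - price).
    """
    return price + fraction_of_bankroll * (1 - price)

# E.g. staking 75% of one's bankroll on a contract priced at 0.40 is only
# Kelly-optimal if one's true credence is roughly 0.85.
print(implied_private_probability(0.75, 0.40))  # ~0.85
```

On this toy math, the larger the fraction of bankroll staked at long odds, the more extreme the private credence needed to rationalize it as a Kelly bet, which is presumably the shape of the argument.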
Point is that the winners contribute epistemics and the losers contribute money. The real winner is society [if the questions are about socially-relevant topics].
Good post, it's underappreciated that a society of ideally rational people wouldn't have unsubsidized, real-money prediction markets.
unless you've actually got other people being wrong even in light of the new actors' information
Of course in real prediction markets this is exactly what we see. Maybe you could think of PMs as they exist not as something that would exist in an equilibrium of ideally rational agents, but as a method of moving our society closer to such an equilibrium, subsidized by the bets of systematically irrational people. It's not a ...
I know of only two people who anticipated something like what we are seeing far ahead of time: Hans Moravec and Jan Leike.
I didn't know about Jan's AI timelines. Shane Legg also had some decently early predictions of AI around 2030 (~2007 was the earliest I knew about).
Shane Legg had a 2028 median back in 2008; see e.g. https://e-discoveryteam.com/2023/11/17/shane-leggs-vision-agi-is-likely-by-2028-as-soon-as-we-overcome-ais-senior-moments/
I think making inferences from that to modern MIRI is about as confused as making inferences from people's high-school essays about what they will do when they become president
Yeah, but it's not just the old MIRI views, but those in combination with their statements about what one might do with powerful AI, the telegraphed omissions in those statements, and other public parts of their worldview e.g. regarding the competence of the rest of the world. I get the pretty strong impression that "a small group of people with overwhelming hard power" was the ideal goal, and that this would ideally be controlled by MIRI or by a small group of people handpicked by them.
I think they talked explicitly about planning to deploy the AI themselves back in the early days (2004-ish), then gradually transitioned to talking generally about what someone with a powerful AI could do.
But I strongly suspect that in the event that they were the first to obtain powerful AI, they would deploy it themselves or perhaps give it to handpicked successors. Given Eliezer's worldview, I don't think it would make much sense for them to give the AI to the US government (considered incompetent) or to AI labs (negligently reckless).
It wasn't specified, but I think they strongly implied it would be that or something equivalently coercive. The "melting GPUs" plan was explicitly not a pivotal act but rather something with the required level of difficulty, and it was implied that the actual pivotal act would be something further outside the political Overton window. When you consider the ways "melting GPUs" would be insufficient, a plan like this is the natural conclusion.
doesn't require replacing existing governments
I don't think you would need to replace existing governments. Just block all AI projects and maintain your ability to continue doing so in the future via maintaining military supremacy.
That to me is a very very non-central case of "take over the world", if it is one at all.
This is about "what would people think when they hear that description" and I could be wrong, but I expect "the plan is to take over the world" summary would lead people to expect "replace governments" level of interference, not "coerce/trade to ensure this specific policy" - and there's a really really big difference between the two.
"Taking over" something does not imply that you are going to use your authority in a tyrannical fashion. People can obtain control over organizations and places and govern with a light or even barely-existent touch, it happens all the time.
Would you accept "they plan to use extremely powerful AI to institute a minimalist, AI-enabled world government focused on preventing the development of other AI systems" as a summary? Like sure, "they want to take over the world" as a gist of that does have a bit of an editorial slant, but not that much of one. I think ...
Are you saying that AIS movement is more power-seeking than environmentalist movement that spent 30M$+[...]
I think that AIS lobbying is likely to have more consequential and enduring effects on the world than environmental lobbying regardless of the absolute size in body count or amount of money, so yes.
"MIRI default plan" was "to do math in hope that some of this math will turn out to be useful".
I mean yeah, that is a better description of their publicly-known day-to-day actions, but intention also matters. They settled on math after it became clea...
Are you sure [...] et cetera are less power-seeking than AI Safety community?
Until recently the MIRI default plan was basically "obtain god-like AI and use it to take over the world" ("pivotal act"); it's hard to get more power-seeking than that. Other wings of the community have been more circumspect but also more active in things like founding AI labs, influencing government policy, etc., to the tune of many billions of dollars worth of total influence. Not saying this is necessarily wrong, but it does seem empirically clear that AI-risk-avoiders are mo...
My understanding of the MIRI plan was "have a controllable, safe AI that's just powerful enough to take some action that prevents anyone else from building a more powerful and more dangerous AI". I wouldn't call that God-like or an intention to take over the world. The go-to [acknowledged as not that plausible] example is "melt all the GPUs". Your description feels grossly inaccurate.
I wonder if "brains" of the sort that are useful for math and programming are necessarily all that helpful here. I think intuition-guided trial and error might work better. That's been my experience dealing with chronic-illness-type stuff.
I think she meant he was looking for epistemic authority figures to defer to more broadly, even if it wasn't because he thought they were better at math than him.
Some advanced meditators report that they do perceive experience as being basically discrete, flickering in and out of existence at a very high frequency (which is why it might appear continuous without sufficient attention). See e.g. https://www.mctb.org/mctb2/table-of-contents/part-i-the-fundamentals/5-the-three-characteristics/
Tangentially related: some advanced meditators report that their sense that perception has a center vanishes at a certain point along the meditative path, and this is associated with a reduction in suffering.
I don't know enough about hormonal biology to guess a specific cause (some general factor of neoteny, perhaps?). It's much easier to infer that it's likely some third factor than to know exactly what third factor it is. I actually think most of the evidence in this very post supports the third-factor position or is equivocal: testosterone acting as a nootropic is very weird if it makes you dumber; that men and women have equal IQs seems not to be true; the study cited to support a U-shaped relationship seems flimsy; that most of the ostensible damage occurs...
I think using the universal prior again is more natural. It's simpler to use the same complexity metric for everything; it's more consistent with Solomonoff induction, in that the weight assigned by Solomonoff induction to a given (world, claw) pair would be roughly exponentially small in the sum of their Kolmogorov complexities; and the universal prior dominates the inverse-square measure but the converse doesn't hold.
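Spelling out the middle claim in standard notation (M for the universal prior, K for prefix Kolmogorov complexity; the relations hold only up to the usual additive and logarithmic slop, so treat this as a sketch):

$$ -\log_2 M(\text{world}, \text{claw}) \;\approx\; K(\text{world}, \text{claw}) \;\lesssim\; K(\text{world}) + K(\text{claw}), $$

i.e. the Solomonoff weight of the pair is roughly $2^{-(K(\text{world}) + K(\text{claw}))}$, so penalizing the claw by the universal prior just reuses the same complexity measure that already prices the world.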
This is far too broadly stated; the actual message people will take away from an unexpected suit is verrrrry context-dependent, depending on (among other things) who the suit-wearer is, who is observing, how the suit-wearer carries himself, the particular situation the suit is worn in, etc. etc. etc. Judging from the post, it sounds like those things create an overall favorable impression for lsusr? (It's hard to tell from just a post of course, but still.)
Yeah, I started wearing a suit in specific contexts after many months of careful consideration. It's not random at all. Everything about it is carefully considered, from the number of buttons on my jacket to the color of my shoes.
I mostly wear it around artists. Artists basically never wear suits where I live, but they really appreciate them because ① artists are particularly sensitive to aesthetic fundamentals and ② artists like creative clothing.