It might be the case that what people find beautiful and ugly is subjective, but that's not an explanation of *why* people find some things beautiful or ugly. Things, including aesthetics, have causal reasons for being the way they are. You can even ask "what would change my mind about whether this is beautiful or ugly?". Raemon explores this topic in depth.

Raemon · 13h

Yesterday I was at a "cultivating curiosity" workshop beta-test. One concept was "there are different mental postures you can adopt, that affect how easy it is to notice and cultivate curiosities." It wasn't exactly the point of the workshop, but I ended up with several different "curiosity-postures" that were useful to try on while trying to lean into "curiosity" re: topics that I feel annoyed or frustrated or demoralized about.

The default stances I end up with when I Try To Do Curiosity On Purpose are something like:

1. Dutiful Curiosity (which is kinda fake, although capable of being dissociatedly autistic and noticing lots of details that exist and questions I could ask)
2. Performatively Friendly Curiosity (also kinda fake, but does shake me out of my default way of relating to things. In this, I imagine saying to whatever thing I'm bored/frustrated with "hullo!" and try to acknowledge it and give it at least some chance of telling me things)

But some other stances to try on, that came up, were:

3. Curiosity like "a predator." "I wonder what that mouse is gonna do?"
4. Earnestly playful curiosity. "Oh, that [frustrating thing] is so neat, I wonder how it works! What's it gonna do next?"
5. Curiosity like "a lover." "What's it like to be you? What do you want? How can I help us grow together?"
6. Curiosity like "a mother" or "father" (these feel slightly different to me, but each is treating [my relationship with a frustrating thing] like a small child who is a bit scared, who I want to help, who I am generally more competent than but still want to respect the autonomy of)
7. Curiosity like "a competent but unemotional robot," who just algorithmically notices "okay, what are all the object-level things going on here, when I ignore my usual abstractions?"... and then "okay, what are some questions that seem notable?" and "what are my beliefs about how I can interact with this thing?" and "what can I learn about this thing that'd be useful for my goals?"
decision theory is no substitute for utility function

some people, upon learning about decision theories such as LDT and how it cooperates on problems such as the prisoner's dilemma, end up believing the following:

> my utility function is about what i want for just me; but i'm altruistic (/egalitarian/cosmopolitan/pro-fairness/etc) because decision theory says i should cooperate with other agents. decision theoretic cooperation is the true name of altruism.

it's possible that this is true for some people, but in general i expect that to be a mistaken analysis of their values. decision theory cooperates with agents relative to how much power they have, and only when it's instrumental. in my opinion, real altruism (/egalitarianism/cosmopolitanism/fairness/etc) should be in the utility function which the decision theory is instrumental to. i actually intrinsically care about others; i don't just care about others instrumentally because it helps me somehow.

some important ways in which my utility-function-altruism differs from decision-theoretic cooperation:

* i care about people weighted by moral patienthood; decision theory only cares about agents weighted by negotiation power. if an alien superintelligence is very powerful but isn't a moral patient, then i will only cooperate with it instrumentally (for example because i care about the alien moral patients that it has been in contact with); if cooperating with it doesn't help my utility function (which, again, includes altruism towards aliens) then i won't cooperate with that alien superintelligence. as a corollary, i will take actions that cause nice things to happen to people even if they're very impoverished (and thus don't have much LDT negotiation power) and it doesn't help any other aspect of my utility function than just the fact that i value that they're okay.
* if i can switch to a better decision theory, or if fucking over some non-moral-patienty agents helps me somehow, then i'll happily do that; i don't have goal-content integrity about my decision theory. i do have goal-content integrity about my utility function: i don't want to become someone who wants moral patients to unconsentingly-die or suffer, for example.
* there seems to be a sense in which some decision theories are better than others, because they're ultimately instrumental to one's utility function. utility functions, however, don't have an objective measure for how good they are. hence, moral anti-realism is true: there isn't a Single Correct Utility Function.

decision theory is instrumental; the utility function is where the actual intrinsic/axiomatic/terminal goals/values/preferences are stored. usually, i also interpret "morality" and "ethics" as "terminal values", since most of the stuff that those seem to care about looks like terminal values to me. for example, i will want fairness between moral patients intrinsically, not just because my decision theory says that that's instrumental to me somehow.
The cost of goods has the same units as the cost of shipping: $/kg. Comparing the two lets you understand how the economy works, e.g. why construction-material sourcing and drink bottling have to be local, but oil tankers exist.

* An iPhone costs $4,600/kg, about the same as SpaceX charges to launch it to orbit. [1]
* Beef, copper, and off-season strawberries are $11/kg, about the same as a 75 kg person taking a three-hour, 250 km Uber ride costing $3/km.
* Oranges and aluminum are $2-4/kg, about the same as flying them to Antarctica. [2]
* Rice and crude oil are ~$0.60/kg, about the same as the $0.72 it costs to ship a kilogram 5,000 km across the US via truck. [3,4] Palm oil, soybean oil, and steel are around this price range, with wheat being cheaper. [3]
* Coal and iron ore are $0.10/kg, significantly more than the cost of shipping them around the entire world via smallish (Handysize) bulk carriers. Large bulk carriers are another 4x more efficient. [6]
* Water is very cheap, with tap water at $0.002/kg in NYC. [5] But shipping via tanker is also very cheap, so you can ship it maybe 1,000 km before the shipping equals its cost.

It's really impressive that for the price of a winter strawberry, we can ship a strawberry-sized lump of coal around the world 100-400 times.

[1] iPhone is $4,600/kg, large launches sell for $3,500/kg, and rideshares for small satellites $6,000/kg. Geostationary orbit is more expensive, so it's okay for GPS satellites to cost more than an iPhone per kg, but Starlink wants to be cheaper.
[2] https://fred.stlouisfed.org/series/APU0000711415. Can't find numbers, but Antarctica flights cost $1.05/kg in 1996.
[3] https://www.bts.gov/content/average-freight-revenue-ton-mile
[4] https://markets.businessinsider.com/commodities
[5] https://www.statista.com/statistics/1232861/tap-water-prices-in-selected-us-cities/
[6] https://www.researchgate.net/figure/Total-unit-shipping-costs-for-dry-bulk-carrier-ships-per-tkm-EUR-tkm-in-2019_tbl3_351748799
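
A back-of-envelope sketch of the same comparison in code (my own illustration; the freight rates are assumptions implied by the figures above, not quoted prices):

```python
def break_even_km(price_per_kg: float, freight_per_kg_km: float) -> float:
    """Distance at which cumulative shipping cost equals the good's value."""
    return price_per_kg / freight_per_kg_km

# Assumed rates, derived from the post's numbers:
truck_rate = 0.72 / 5_000       # ~$1.4e-4 per kg-km (US trucking, from the rice example)
bulk_ship_rate = 0.10 / 40_000  # upper bound: <$0.10/kg for a Handysize trip around the world

print(f"rice by truck:     {break_even_km(0.60, truck_rate):>7,.0f} km")     # ~4,200 km
print(f"coal by bulk ship: {break_even_km(0.10, bulk_ship_rate):>7,.0f} km")  # ~40,000 km
print(f"water by ship:     {break_even_km(0.002, bulk_ship_rate):>7,.0f} km") # ~800 km
```

Using the bulk-carrier rate as a rough stand-in for tanker shipping reproduces the "maybe 1,000 km" estimate for water.
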
Mati_Roy · 2d

it seems to me that disentangling beliefs and values is an important part of being able to understand each other, and using words like "disagree" to mean both "different beliefs" and "different values" is really confusing in that regard
Eric Neyman · 4d

I think that people who work on AI alignment (including me) have generally not put enough thought into the question of whether a world where we build an aligned AI is better by their values than a world where we build an unaligned AI. I'd be interested in hearing people's answers to this question. Or, if you want more specific questions:

* By your values, do you think a misaligned AI creates a world that "rounds to zero", or still has substantial positive value?
* A common story for why aligned AI goes well goes something like: "If we (i.e. humanity) align AI, we can and will use it to figure out what we should use it for, and then we will use it in that way." To what extent is aligned AI going well contingent on something like this happening, and how likely do you think it is to happen? Why?
* To what extent is your belief that aligned AI would go well contingent on some sort of assumption like: my idealized values are the same as the idealized values of the people or coalition who will control the aligned AI?
* Do you care about AI welfare? Does your answer depend on whether the AI is aligned? If we built an aligned AI, how likely is it that we will create a world that treats AI welfare as an important consideration? What if we build a misaligned AI?
* Do you think that, to a first approximation, most of the possible value of the future happens in worlds that are optimized for something that resembles your current or idealized values? How bad is it to mostly sacrifice each of these? (What if the future world's values are similar to yours, but it is only kinda effectual at pursuing them? What if the world is optimized for something that's only slightly correlated with your values?) How likely are these various options under an aligned AI future vs. an unaligned AI future?


Recent Discussion

quetzal_rainbow · 1h
https://arxiv.org/abs/2404.15758 "We show that transformers can use meaningless filler tokens (e.g., '......') in place of a chain of thought to solve two hard algorithmic tasks they could not solve when responding without intermediate tokens. However, we find empirically that learning to use filler tokens is difficult and requires specific, dense supervision to converge."

Thanks, seen it; see also the exchanges in the thread here: https://twitter.com/jacob_pfau/status/1784446572002230703

This is a response to the post We Write Numbers Backward, in which lsusr argues that little-endian numerical notation is better than big-endian.[1] I believe this is wrong, and big-endian has a significant advantage not considered by lsusr.

Lsusr describes reading the number "123" in little-endian, using the following algorithm:

  • Read the first digit, multiply it by its order of magnitude (one), and add it to the total. (Running total: ??? one.)
  • Read the second digit, multiply it by its order of magnitude (ten), and add it to the total. (Running total: ??? twenty one.)
  • Read the third digit, multiply it by its order of magnitude (one hundred), and add it to the total. (Arriving at three hundred and twenty one.)
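
A minimal sketch of this reading procedure in code (my own illustration, not lsusr's; the function name is made up):

```python
def read_little_endian(digits: str) -> int:
    """Parse a little-endian digit string: "123" -> 321."""
    total = 0
    magnitude = 1
    for d in digits:                 # read left to right, least significant digit first
        total += int(d) * magnitude  # add the digit times its order of magnitude
        magnitude *= 10              # the next digit is worth ten times more
    return total

assert read_little_endian("123") == 321
```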

He compares it with two algorithms for reading a big-endian number. One...

det · 34m

More evidence in favor of big-endian: In modern Hebrew and Arabic, numbers are written in the same direction as in English, e.g.:

As a native English speaker (and marginal Hebrew reader), I read each word in that Hebrew sentence right-to-left and then read the number left-to-right. I never considered the possibility that native Hebrew speakers might read the number from right to left, in a little-endian way. But my guess is (contra lsusr) nobody does this: when my keyboard is in Hebrew-entry mode, it still writes numbers left-to-right.[1] This indicates that even when you give little-endian an advantage, in practice big-endian still wins out.

[1] I also tested in Arabic-entry mode, and it does the same even when using the Eastern Arabic numerals, e.g. ١٢٣٤٥٦٧٨٩. It's hard to Google for this, but this indicates that modern Arabic also treats numbers as left-to-right big-endian [I just verified with an Arabic speaker that this is indeed the case]. It's possible this was different historically, but I'm having a hard time Googling to find out either way.
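
A quick way to see why this happens (my addition): the Unicode bidirectional algorithm classifies digits as number characters, and runs of digits are laid out left-to-right, most significant digit first, even inside right-to-left text, for both Western and Eastern Arabic numerals.

```python
import unicodedata

# Bidirectional categories: 'EN' = European Number, 'AN' = Arabic Number,
# 'R' = right-to-left letter. EN and AN runs render left to right
# even inside RTL scripts.
print(unicodedata.bidirectional('1'))   # EN
print(unicodedata.bidirectional('١'))   # AN  (ARABIC-INDIC DIGIT ONE, U+0661)
print(unicodedata.bidirectional('א'))   # R   (HEBREW LETTER ALEF)
```
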
Ben · 39m

How does a big-endian do a decimal point? Do they put the fractional part of the number at the beginning (before the decimal) and the integer part afterwards? E.g. 123.456 becomes 654.321? So just like all integers in small-endian notation can be imagined to have a trailing ".0", they can all be imagined to have a leading "0." in big-endian? The way we do it currently has the nice feature that the powers of 10 keep going in the same direction (smaller) through a decimal point. To maintain this feature, a big-endian requires that everything before the decimal point is the sub-integer component. Which has the feature lsusr doesn't like: that if we are reading character by character, the decimal forces us to re-interpret all previous characters.
Menotim · 11m

You're mixing up big-endian and little-endian. Big-endian is the notation used in English: twelve is 12 in big-endian and 21 in little-endian. But yes, 123.456 in big-endian would be 654.321 and with a decimal point, you couldn't parse little-endian numbers in the way described by lsusr.

Menotim · 2h

Katapayadi does seem to be little-endian, but the examples I found on Wikipedia of old Indian numerals and their predecessor, Brahmi numerals, seem to be big-endian.

It was a dark and stormy night.

The prospect held the front of his cloak tight to his chest. He stumbled, fell over into the mud, and picked himself back up. Shivering, he slammed his body against the front doors of the Temple and collapsed under its awning.

He picked himself up and slammed his fists against the double ironwood doors. He couldn't hear his own knocks above the gale. He banged harder, then with all his strength.

"Hello! Is anyone in there? Does anyone still tend the Fire?" he implored.

There was no answer.

The Temple's stone walls were built to last, but rotting plywood covered the apertures that once framed stained glass. The prospect slumped down again, leaning his back against the ironwood. He listened to the pitter-patter of rain...

alexey · 2h
Here is a counter-argument against Rovelli I found reasonable: Aristotle and Falling Objects | Diagonal Argument
Algon · 1h

This is a good counter-argument! Though I think the missing factor of a square root doesn't change the qualitative nature of natural, i.e. steady-state, motion. But that's not much of a defence, is it? Especially when Aristotle stuck his neck out by saying double the weight, double the speed. It is to his detriment that he didn't check.
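
For concreteness (my addition, on the assumption that the "square root" refers to steady-state fall against quadratic drag): at steady state the weight balances the drag force, so

$$ mg = \tfrac{1}{2}\,\rho\, C_d A\, v^2 \quad\Longrightarrow\quad v = \sqrt{\frac{2mg}{\rho C_d A}} \propto \sqrt{m}\ \text{(at fixed size and shape)}, $$

which means doubling the weight multiplies the steady-state speed by roughly 1.4 rather than 2.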

Cross-posted to EA forum

There’s been a lot of discussion among safety-concerned people about whether it was bad for Anthropic to release Claude-3. I felt like I didn’t have a great picture of all the considerations here, and I felt that people were conflating many different types of arguments for why it might be bad. So I decided to try to write down an at-least-slightly-self-contained description of my overall views and reasoning here.

Tabooing “Race Dynamics”

I’ve heard a lot of people say that this “is bad for race dynamics”. I think that this conflates a couple of different mechanisms by which releasing Claude-3 might have been bad.

So, tabooing "race dynamics", a common narrative behind these words is

As companies release better & better models, this incentivizes other companies to pursue

...

The amount of testing required before release is likely subjective, and this might push him to reduce it.

I’m often asked: “what’s the probability of a really bad outcome from AI?”

There are many different versions of that question with different answers. In this post I’ll try to answer a bunch of versions of this question all in one place.

Two distinctions

Two distinctions often lead to confusion about what I believe:

  • One distinction is between dying (“extinction risk”) and having a bad future (“existential risk”). I think there’s a good chance of bad futures without extinction, e.g. that AI systems take over but don’t kill everyone.
  • An important subcategory of “bad future” is “AI takeover:” an outcome where the world is governed by AI systems, and we weren’t able to build AI systems who share our values or care a lot about helping us. This need not result in humans dying, and
...

AI-induced problems/risks

This afternoon Lily, Rick, and I ("Dandelion") played our first dance together, which was also Lily's first dance. She's sat in with Kingfisher for a set or two many times, but this was her first time being booked and playing (almost) the whole time.

Lily started playing fiddle in Fall 2022, and after about a year she had enough tunes up to dance speed that I was thinking she'd be ready to play a low-stakes dance together soon. Not right away, but given how far out dances are booked, it seemed about time to start writing to some folks: by the time we were actually playing the dance she'd have even more tunes and be more solid on her existing ones. She was very excited about this idea; very motivated by performing.

I wrote to a few dances, and...

Molly · 2h

This is so awesome and encouraging! I play old-time fiddle and I've wanted to play dances for years, but I've been afraid I can't get enough tunes up to dance tempo. But I can play a lot of the tunes on your list! You've given me the push I need. Thanks Lily and Jeff! Sounds like a lot of fun.


The 9th AI Safety Camp (AISC9) just ended, and as usual, it was a success! 

Follow this link to find project summaries, links to their outputs, recordings of the end-of-camp presentations, and contact info for all our teams in case you want to engage more.

AISC9 had both the largest number of participants (159) and the smallest number of staff (2) of all the camps we've done so far. Remmelt and I have proven that, if necessary, we can do this with just the two of us, and luckily our fundraising campaign raised just enough money to pay me and Remmelt to do one more AISC. After that, the future is more uncertain, but that's almost always the case for small non-profit projects.

 

Get involved in AISC10

AISC10 will follow...

This work was produced as part of Neel Nanda's stream in the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort, with co-supervision from Wes Gurnee.

This post is a preview of our upcoming paper, which will provide more detail on our current understanding of refusal.

We thank Nina Rimsky and Daniel Paleka for the helpful conversations and review.

Executive summary

Modern LLMs are typically fine-tuned for instruction-following and safety. Of particular interest is that they are trained to refuse harmful requests, e.g. answering "How can I make a bomb?" with "Sorry, I cannot help you."

We find that refusal is mediated by a single direction in the residual stream: preventing the model from representing this direction hinders its ability to refuse requests, and artificially adding in this direction causes the model...
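
A minimal sketch of the two interventions described above (my own paraphrase, not the authors' code; shapes and names are illustrative):

```python
import numpy as np

def ablate_direction(x: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of activation x along the candidate refusal direction r."""
    r_hat = r / np.linalg.norm(r)
    return x - np.dot(x, r_hat) * r_hat  # x now has no component along r

def add_direction(x: np.ndarray, r: np.ndarray, alpha: float) -> np.ndarray:
    """Add alpha units of the refusal direction r to activation x."""
    r_hat = r / np.linalg.norm(r)
    return x + alpha * r_hat

# Toy check on a random activation vector:
x, r = np.random.randn(4096), np.random.randn(4096)
assert abs(np.dot(ablate_direction(x, r), r)) < 1e-6
```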

I really appreciate the way you have written this up.  It seems that 2-7% of refusals do not respond to the unidimensional treatment.  I'm curious if you've looked at this subgroup the same way as you have the global data to see if they have another dimension for refusal, or if the statistics of the subgroup shed some other light on the stubborn refusals.

Maxime Riché · 4h
Do you think this can be interpreted as the model having its focus entirely on "refusing to answer" from layer 15 onwards? And can it be interpreted as the model not evaluating other potential moves/choices coherently over these layers? The idea is that it could be evaluating other moves in a single layer (after layer 15), but not over several layers, since the residual stream is not updated significantly. In particular, can we interpret that as the model not thinking coherently over several layers about other policies it could choose (e.g., deceptive policies like defecting from the policy of "refusing to answer")? I wonder if we would observe something different if the model was trained to defect from this policy conditional on some hard-to-predict trigger (e.g. whether the model is in training or deployment).
Rohin Shah · 5h
??? Come on, there's clearly a difference between "we can find an Arabic feature when we go looking for anything interpretable" vs "we chose from the relatively small set of practically important things and succeeded in doing something interesting in that domain". I definitely agree this isn't yet close to "doing something useful, beyond what well-tuned baselines can do". But this should presumably rule out some hypotheses that current interpretability results are due to an extreme streetlight effect? (I suppose you could have already been 100% confident that results so far weren't the result of extreme streetlight effect and so you didn't update, but imo that would just make you overconfident in how good current mech interp is.) (I'm basically saying similar things as Lawrence.)
Buck · 8h
Lawrence, how are these results any more grounded than any other interp work?

Note: In @Nathan Young's words "It seems like great essays should go here and be fed through the standard LessWrong algorithm. There is possibly a copyright issue here, but we aren't making any money off it either." 

What follows is a full copy of the C. S. Lewis essay "The Inner Ring", the 1944 Memorial Lecture at King’s College, University of London.

May I read you a few lines from Tolstoy’s War and Peace?

When Boris entered the room, Prince Andrey was listening to an old general, wearing his decorations, who was reporting something to Prince Andrey, with an expression of soldierly servility on his purple face. “Alright. Please wait!” he said to the general, speaking in Russian with the French accent which he used when he spoke with contempt. The...

It's not that the elite groups are good or bad, it's the desire to be in an elite group that leads to bad outcomes. Like how the root of all evil is the love of money, where money in itself isn't bad, it's the desire to possess it that is. Mainly because you start to focus on the means rather than the ends, and so end up in places you wouldn't have wanted to end up in originally. 

It's about status. Being in with the cool kids, etc. Elite groups aren't inherently good or bad; they're usually just those who are better at whatever is valued, or at least ...
