That said, you can hide it in your user-settings.
This solves my problem, thank you. Also it does look just like the screenshot, no problems other than what I brought up when you click on it.
This might just be me, but I really hate the floating action button on LW. It's an eyesore on what is otherwise a very clean website. The floating action button was designed to "Represent the primary action on a screen" and draw the user's attention to itself. It does a great job at it, but since "ask us anything, or share your feedback" is not the primary thing you'd want to do, it's distracting.
Not only does it do that, but it also gives the impression that this is another cobbled-together Meteor app, and therefore my brain instantly makes me associate it ...
I bet this is a side effect of having a large pool of bounded rational agents that all need to communicate with each other, but not necessarily frequently. When two agents only interact briefly, neither agent has enough data to work out the "meaning" of the other's words. Each word could mean too many different things. So you can probably show that under the right circumstances, it's beneficial for agents in a pool to have a protocol that maps speech-acts to inferences the other party should make about reality (amongst other things, such as other acti...
I personally think that something more akin to minimum utilitarianism is more in line with my intuitions. That is, to a first order approximation, define utility as (soft)min(U(a),U(b),U(c),U(d)...) where a,b,c,d... are the sentients in the universe. This utility function mostly captures my intuitions as long as we have reasonable control over everyone's outcomes, utilities are comparable, and the number of people involved isn't too crazy.
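For concreteness, here's a minimal sketch of the soft-min I have in mind, using the log-sum-exp form (my choice of smoothing; the temperature parameter tau is illustrative):

```python
import numpy as np

def softmin_utility(utilities, tau=1.0):
    """Smooth approximation of min(U(a), U(b), U(c), ...).

    As tau -> 0 this approaches the hard minimum; larger tau blends in
    the better-off individuals' utilities more gradually.
    """
    u = np.asarray(utilities, dtype=float)
    # log-sum-exp form of softmin: -tau * log(sum(exp(-U_i / tau)))
    return -tau * np.log(np.sum(np.exp(-u / tau)))
```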
Money makes the world turn and it enables research, be it academic or independent. I would just focus on getting a bunch of that. Send out 10x to 20x more resumes than you already have, expand your horizons to the entire planet, and put serious effort into prepping for interviews.
You could also try getting a position at CHAI or some other org that supports AI alignment PhDs, but it's my impression that those centres are currently funding-constrained and already have a big list of very high-quality applicants, so your presence or absence might not make that...
I'd put my money on lowered barriers to entry on the internet and eternal September effects as the primary drivers of this. In my experience, the people I interact with IRL haven't really gotten any stupider. People can still code or solve business problems just as well as they used to. The massive spike in stupidity seems to have occurred mostly on the internet.
I think this is because of two effects that reinforce each other in a vicious cycle.
Barriers to entry on the internet have been reduced. A long time ago you needed technical know-how to even operate
At its core, this is the main argument why the Solomonoff prior is malign: a lot of the programs will contain agents with preferences, these agents will seek to influence the Solomonoff prior, and they will be able to do so effectively.
Am I the only one who sees this much less as a statement that the Solomonoff prior is malign, and much more as a statement that reality itself is malign? I think the proper reaction is not to use a different prior, but to build agents that are robust to the possibility that we live in a simulation run by influence-seeking malign agents, so that they don't end up like this.
Hmm, at this point it might be just a difference of personalities, but to me what you're saying sounds like "if you don't eat, you can't get food poisoning". "Dual identity" doesn't work for me; I feel that social connections are meaningless if I can't be upfront about myself.
That's probably a good part of it. I have no problem hiding a good chunk of my thoughts and views from people I don't completely trust, and for most practical intents and purposes I'm quite a bit more "myself" online than IRL.
...But in any case there will be many subnetworks in the net
So first of all, I think the dynamics surrounding offense are tripartite. You have the party who said something offensive, the party who gets offended, and the party who judges the others involved based on the remark. Furthermore, the reason why simulacra=bad in general is because the underlying truth is irrelevant. Without extra social machinery, there's no way to distinguish between valid criticism and slander. Offense and slander are both symmetric weapons.
...This might be another difference of personalities...you can try to come up with a differe
I'm breaking this into a separate thread since I think it's a separate topic.
Second, specifically regarding Crocker's rules, I'm not a fan of them at all. I think that you can be honest and tactful at the same time, and it's reasonable to expect the same from other people.
So I disagree. Obviously you can't impose Crocker's rules on others, but I find it much easier and far less mentally taxing to communicate with people I don't expect to get offended. Likewise, I've gained a great deal of benefit from people very straightforwardly and bluntly calling me out...
First, when Jacob wrote "join the tribe", I don't think ey had anything as specific as a rationalist village in mind? Your model fits the bill as well, IMO. So what you're saying here doesn't seem like an argument against my objection to Zack's objection to Jacob.
So my objection definitely applies much more to a village than less tightly bound communities, and Jacob could have been referring to anything along that spectrum. But I brought it up because you said:
...Moreover, the relationships between them shouldn't be purely impersonal and intellectual. A
IMO, F*** or F!#@. I feel like it has more impact that way, since it means you went out of your way to censor yourself and it's not just a verbal habit, as would be the case with either "fuck" or a euphemism.
There are two sides to an options contract: when to buy and when to sell. Wei Dai did well on the first half but updated in the comments on losing most of the gains on the second half. This isn't a criticism; it's hard.
So full disclosure, I'm on the outskirts of the rationality community looking inwards. My view of the situation is mostly filtered through what I've picked up online rather than in person.
With that said, in my mind the alternative is to keep the community more digital, or something that you go to meetups for, and to take advantage of society's existing infrastructure for social support and other things. This is not to say we shouldn't have strong norms; the comment box I'm typing this in is reminding me of many of those norms right now. But the overall ...
I think you are both right about important things, and the problem is whether we can design a community that can draw benefits of mutual support in real life, while minimising the risks. Keeping each other at internet distance is a solution, but I strongly believe it is far from the best we can do.
We probably need to accept that different people will have different preferences about how strongly involved they want to become in real life. For some people, internet debate may be the optimal level of involvement. For other people, it would be something more l...
Sure, tribes also carry dangers such as death spirals and other toxic dynamics. But the solution isn't disbanding the tribe, that's throwing away the baby with the bathwater.
I think we need to be really careful with this and the dangers of becoming a "tribe" shouldn't be understated w.r.t our goals. In a community focused on promoting explicit reason, it becomes far more difficult to tell apart those who are carrying out social cognition from those who are actually carrying out the explicit reason, since the object level beliefs and their justifications...
The problems you discuss are real, but I don't understand what alternative you're defending. The choice is not having society or not having society. You are going to be part of some society anyway. So, isn't it better if it's a society of rationalists? Or do you advocate isolating yourself from everyone as much as possible? I really doubt that is a good strategy.
In practice, I think LessWrong has been pretty good at establishing norms that promote reason, and building some kind of community around them. It's far from perfect, but it's quite good compared t...
Another option not discussed is to control who your message reaches in the first place, and in what medium. I'll claim, without proof or citation, that social media sites like twitter are cesspits that are effectively engineered to prevent constructive conversation and to exploit emotions to keep people on the website. Given that, a choice that can mitigate these kind of situations is to not engage with these social media platforms in the first place. Post your messages on a blog under your own control or a social media platform that isn't designed to hijack your reward circuitry.
I think you're missing an option, though. You can specifically disavow and oppose the malicious actions/actors, and point out that they are not part of your cause, and are actively hurting it. No censorship, just clarity that this hurts you and the cause. Depending on your knowledge of the perpetrators and the crimes, backing this up by turning them or actively thwarting them may be in scope as well.
There is a practical issue with this solution in the era of modern social media. Suppose you have malicious actors who go on to act in your name, but you...
I haven't actually figured that out yet, but several people in this thread have proposed takeaways. I'm leaning towards "social engineering is unreasonably effective". That or something related to keeping a security mindset.
I personally feel that the fact that it was such an effortless attempt makes it more impressive, and really hammers home the lesson we need to take away from this. It's one thing to put in a great deal of effort to defeat some defences. It's another to completely smash through them with the flick of a wrist.
Props to whoever petrov_day_admin_account was for successfully red-teaming lesswrong.
Well, they did succeed, so for that they get points, but I think it was more due to a very weak defense on behalf of the victim rather than a very strong effort by petrov_day_admin_account.
Like, the victim could have noticed things like:
* The original instructions were sent over email + LessWrong message, but the phishing attempt was just a LessWrong message
* The original message was sent by Ben Pace, the latter by petrov_day_admin_account
* They were sent at different points in time, the latter of which was more correlated with the FB post that caused the ...
Agreed, this is probably the best lesson of all. If the buttons exist, they can be hacked or the decision makers can be socially engineered.
270 people might have direct access, but the entire world has indirect access.
As much as I hate to say it, I don't think that it makes much sense for the main hub of the rationalist movement to move away from Berkeley and the Bay Area. There are several rationalist-adjacent organizations that are firmly planted in Berkeley. The ones that are most salient to me are the AI and AI safety orgs. You have OpenAI, MIRI, CHAI, BAIR, etc. Some of these could participate in a coordinated move, but others are effectively locked in place due to their tight connections with larger institutions.
Ehh, Singapore is a good place to do business and live temporarily. But mandatory military service for all male citizens and second-generation permanent residents, along with the work culture, makes it unsuitable as a permanent location to live. Not to mention that there's a massive culture gap between the rats and the Singaporeans.
I think the cooperative advantages mentioned here have really been overlooked when it comes to forecasting AI impacts, especially in slow takeoff scenarios. A lot of forecasts, like WFLL, mainly posit AIs competing with each other. Consequently, Molochian dynamics come into play and humans easily lose control of the future. But with these sorts of cooperative advantages, AIs are in an excellent position to not be subject to those forces and all the strategic disadvantages they bring with them. This applies even if an AI is "merely" at the human level....
Just use bleeding edge tech to analyze ancient knowledge from the god of information theory himself.
This paper seems to be a good summary and puts a lower bound on the entropy of human models of English somewhere between 0.65 and 1.10 BPC. If I had to guess, the real number is probably closer to 0.8-1.0 BPC, as the mentioned paper was able to pull up the lower bound for Hebrew by about 0.2 BPC. Assuming that regular English averages about 4 characters per token, GPT-3 clocks in at 1.73/ln(2)/4 = 0.62 BPC. This is lower than the lower bound mentioned...
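Spelling out that arithmetic (a sketch; the 1.73 nats/token loss and ~4 characters per token are the assumptions above):

```python
import math

loss_nats_per_token = 1.73   # GPT-3's per-token loss, in nats
chars_per_token = 4          # rough average for English BPE text

bits_per_token = loss_nats_per_token / math.log(2)
bits_per_char = bits_per_token / chars_per_token
print(f"{bits_per_char:.2f} BPC")  # ~0.62
```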
I'm OOTL, can someone send me a couple links that explain the game theory that's being referenced when talking about a "battle of the sexes"? I have a vague intuition from the name alone, but I feel this is referencing a post I haven't read.
Edit: https://en.wikipedia.org/wiki/Battle_of_the_sexes_(game_theory)
I'm gonna go with barely, if at all. When you wear a surgical mask and you breathe in, a lot of air flows in from the edges without actually passing through the mask, so the mask doesn't have a very good opportunity to filter the air. At least with N95 and N99 masks, you have a seal around your face, and this forces the air through the filter. You're probably better off wearing a wet bandana or towel that's been tied in such a way as to seal around your face, but that might make it hard to breathe.
I found this, which suggests that they're generally ineffective. ...
Yeah, I'll second the caution to draw any conclusions from this. Especially because this is macroeconomics.
https://en.wikipedia.org/wiki/Sectoral_balances
It is my understanding that this is broadly correct. It is also my understanding that this is not common knowledge.
One hypothesis I have is that even in the situation where there is no goal distribution and the agent has a single goal, subjective uncertainty makes powerful states instrumentally convergent. The motivating real world analogy being that you are better able to deal with unforeseen circumstances when you have more money.
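Here's a toy illustration of that intuition (entirely my own construction: "power" is the number of options kept open, and random option values stand in for unforeseen circumstances):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_best_option(num_options, trials=100_000):
    """Expected value of the best of `num_options` options when their
    values are unknown in advance (drawn uniformly at random here)."""
    values = rng.random((trials, num_options))
    return values.max(axis=1).mean()

# More reachable states -> higher expected value under uncertainty,
# even though the agent only ever pursues a single goal.
for n in (1, 2, 5, 20):
    print(n, round(expected_best_option(n), 3))
```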
I've gone through a similar phase. In my experience you eventually come to terms with those risks and they stop bothering you. That being said, mitigating x and s-risks has become one of my top priorities. I now spend a great deal of my own time and resources on the task.
I also found learning to meditate helps with general anxiety and accelerates the process of coming to terms with the possibility of terrible outcomes.
The way I was envisioning it is that if you had some easily identifiable concept in one model, e.g. a latent dimension/feature that corresponds to the log-odds of something being in a picture, you would train the model to match the behaviour of that feature when given data from the original generative model. Theoretically any loss function will do, as long as the optimum corresponds to the situation where your "classifier" behaves exactly like the original feature in the old model when both of them are looking at the same data.
In practice though, we're compu...
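A minimal sketch of the matching objective I'm describing (all names are hypothetical: `old_feature` is the identified latent in the original model, `probe` is a trainable readout on the new one):

```python
import torch.nn.functional as F

def feature_matching_loss(new_model, probe, old_feature, x):
    """Train `probe` so its output behaves like the original model's
    feature when both see the same data `x`."""
    target = old_feature(x).detach()   # e.g. log-odds of "dog in picture"
    pred = probe(new_model(x))         # candidate "same concept" readout
    # Any loss works, as long as its optimum is pred == target everywhere.
    return F.mse_loss(pred, target)
```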
I think you can loosen (b) quite a bit if you task a separate model with "delineating" the concept in the new network. The procedure does effectively give you access to infinite data, so the boundary for the old concept in the new model can be as complicated as your compute budget allows. Up to and including identifying high level concepts in low level physics simulations.
I think the eventual solution here (and a major technical problem of alignment) is to take an internal notion learned by one model (i.e. found via introspection tools), back out a universal representation of the real-world pattern it represents, then match that real-world pattern against the internals of a different model in order to find the "corresponding" internal notion.
Can't you just run the model in a generative mode associated with that internal notion, then feed that output as a set of observations into your new model and see what lights up in i...
I think this is pretty straightforward to test (a sketch follows the steps below). GPT-3 gives joint probabilities of string continuations given context strings.
Step 1: Give it two prompts, one suggesting that it is playing the role of a smart person, and one where it is playing the role of a dumb person.
Step 2: Ask the "person" a question whose answer demonstrates that person's intelligence (something like a math problem).
Step 3: Write continuations where the person answers correctly and incorrectly.
Step 4: Compare the relative probabilities GPT-3 assigns to each continuation given the p...
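A sketch of that comparison using the legacy OpenAI completions API (model name, prompts, and the helper are illustrative):

```python
import openai  # assumes an API key is configured

def continuation_logprob(context, continuation):
    """Total log-probability GPT-3 assigns to `continuation` given `context`."""
    resp = openai.Completion.create(
        model="davinci",
        prompt=context + continuation,
        max_tokens=0,   # score the prompt only, generate nothing
        echo=True,      # return logprobs for the prompt's own tokens
        logprobs=0,
    )
    lp = resp["choices"][0]["logprobs"]
    # Keep only the tokens that belong to the continuation.
    start = next(i for i, off in enumerate(lp["text_offset"]) if off >= len(context))
    return sum(lp["token_logprobs"][start:])

smart = "The following is an interview with a brilliant mathematician.\nQ: What is 17 * 23?\nA:"
dumb = "The following is an interview with someone who is bad at math.\nQ: What is 17 * 23?\nA:"
for prompt in (smart, dumb):
    print(continuation_logprob(prompt, " 391"), continuation_logprob(prompt, " 410"))
```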
Hypothesis: Unlike the language models before it, and ignoring context length issues, GPT-3's primary limitation is that its output mirrors the distribution it was trained on. Without further intervention, it will write things that are no more coherent than the average person could put together. By conditioning it on output from smart people, GPT-3 can be switched into a mode where it outputs smart text.
Huh.
I did not believe you, so I went and checked the internet archive. Sure enough, all the old posts with a ToC are off center. I did not notice until now.
Nitpick: is there a reason why the margins are so large?
The content on the front page is noticeably off center to the right on 1440x900 monitors.
Edit: The content is noticeably off center to the right in general.
On the standardization and interoperability side of things, there's been effort to develop decentralized social media platforms and protocols, most notably the various platforms of the Fediverse. Together with open-source software, this lets people build large networks that keep the value of network effects while removing monopoly power. I really like the idea of these platforms, but due to the network monopoly of existing social media platforms I think they'll have great difficulty gaining traction.
Yeah, that's pretty pricey. Google is telling me that they can do 1 million characters/month for free using a WaveNet. That might be good enough.
What's the going rate for audio recordings on Fiverr?
With the ongoing drama that is currently taking place, I'm worried that the rationalist community will find itself inadvertently caught up in the culture war. This might cause a large influx of new users who are more interested in debating politics than anything else on LW.
It might be a good idea to put a temporary moratorium/barriers on new signups to the site in the event that things become particularly heated.
Organizations, and entire nations for that matter, can absolutely be made to "feel fear". The retaliation just needs to be sufficiently expensive for the organization. Afterwards, it'll factor in the costs of that retaliation when deciding how to act. If the cost is large enough, it won't do things that will trigger retaliation.
There is no guarantee that it is learning particularly useful representations just because it predicts pixel-by-pixel well, which may be distributed throughout the GPT,
Personally, I felt that that wasn't really surprising either. Remember that this whole deep learning thing started with exactly what OpenAI just did: train a generative model of the data, and then fine-tune it to the relevant task.
However, I'll admit that the fact that there's an optimal layer to tap into, and that they showed this trick works specifically with autoregressive transformer models, is novel to my knowledge.
This isn't news; we've known that sequence predictors could model images for almost a decade now, and OpenAI did the same thing last year with less compute, but no one noticed.
Many of the users on LW have their real names and reputations attached to this website. If LW were to come under this kind of loosely coordinated memetic attack, many people would find themselves harassed and their reputations and careers could easily be put in danger. I don't want to sound overly dramatic, but the entire truth seeking and AI safety project could be hampered by association.
...That's why even though I remain anonymous, I think it's best if I refrain from discussing these topics at anything except the meta level on LW. Ev
Another alternative is to use a 440 nm light source and a frequency-doubling crystal, which halves the wavelength to 220 nm, within the UV-C band (https://phoseon.com/wp-content/uploads/2019/04/Stable-high-efficiency-low-cost-UV-C-laser-light-source-for-HPLC.pdf), although the efficiency is questionable. There are also other variations based on frequency quadrupling: https://opg.optica.org/oe/fulltext.cfm?uri=oe-29-26-42485&id=465709.