I've been interested in learning and playing Figgie for a while. Unfortunately, when I tried the online platform I wasn't able to find any online games. Very enthused to learn there's an Android option now; I'll be trying that out.
Your comparison of poker and Figgie very much reminded me of Daniel Coyle's comparison of football and futsal, to which he attributed the disproportionate number of professional Brazilian footballers.
TL;DR: futsal is a sort of indoor soccer favored in Brazil, with a smaller, heavier ball, a smaller field, and fewer players. Fewer p...
Lol, just in the last few days I was running through LeetCode's SQL 50 problems to refresh myself. They're some good, fun puzzles.
I'll look into R and basic statistical methods as well.
That sounds like a pretty good basic method- I do have some (minimal) programming experience, but I didn't use it for D&D Sci; I literally just opened the data in Excel and tried looking at it and manipulating it that way. I don't know where I would start as far as using code to try and synthesize info from the dataset. I'll definitely look into what other people did, though.
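(For what it's worth, a minimal sketch of where one might start in Python, assuming the D&D Sci data is exported as a CSV; the filename and column names here are placeholders, not the real ones:)

```python
import pandas as pd

# Load the scenario data (filename is a placeholder for whatever the export is called).
df = pd.read_csv("dnd_sci_data.csv")

# First pass: what columns exist, what types they are, and basic summary stats.
print(df.info())
print(df.describe(include="all"))

# Group by a categorical column and compare averages of a numeric outcome.
# Column names here are hypothetical; swap in whatever the dataset actually has.
print(df.groupby("adventurer_class")["quest_success"].mean().sort_values())

# Cross-tabulate two categorical columns to look for interactions.
print(pd.crosstab(df["adventurer_class"], df["region"], normalize="index"))
```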
I watched this video, and this is what I bought, maximizing for cost-effectiveness. Rate my stack:
I've been experimenting a little bit with using AI to create personalized music, and I feel like it's pretty impactful for me. I'm able to keep ideas floating around my unconscious. Very interesting; it feels like untapped territory.
I'm imagining making an entire soundtrack for my life organized around the values I hold, the personal experiences I find primary, and who I want to become. I think I need to get better at generating AI music, though. I've been using Suno, but maybe I need to learn Udio. I was really impressed with what I was able to get out of Suno, and for some reason it sounded better to me than Udio even though the quality is obviously inferior in some respects.
That's an interesting idea! I think it's really cool when things come easily, but I know that's not generally going to be the case- I'm probably going to have to put some work in.
My priority is more on the 'high-utility' part than anything.
Something that seems like it should be easy but is actually difficult for me is executive functioning- getting myself to do things that I don't want to do. But that's more of a personal/mental health thing than anything.
Could someone open a Manifold market on the relevant questions here so I could get a better sense of the probabilities involved? Unfortunately, I don't know the relevant questions or have the requisite mana.
Personal note- the first time I came into contact with adult gene editing was the YouTuber Thought Emporium curing his own lactose intolerance, and I was always massively impressed by that and very disappointed the treatment didn't reach market.
You need to think about your real options and the expected value of your behavior. If we're in a world where technology allows for a fast takeoff and alignment is hard (EY world), I imagine the odds of survival with company acceleration are 0% and the odds of survival without are 1%.
But if we live in a world where compute/capital/other overhangs are a significant influence on AI capabilities and alignment is just tricky, company acceleration seems like it could improve the chances of survival pretty significantly, maybe from 5% to 50%.
These obviously aren'...
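(To make the comparison concrete, here's a toy expected-value sketch in Python, using the illustrative numbers above and an equally made-up 50/50 credence split between the two worlds; none of these figures are claims, just placeholders for the structure of the calculation:)

```python
# Toy expected-value comparison: survival odds under two hypothetical worlds,
# using the illustrative numbers from the comment above.
worlds = {
    # world name: (P(survival) if companies accelerate, P(survival) if they don't)
    "EY world (fast takeoff, alignment hard)": (0.00, 0.01),
    "overhang world (alignment just tricky)":  (0.50, 0.05),
}

# Hypothetical credence assigned to each world; these should sum to 1.
credence = {
    "EY world (fast takeoff, alignment hard)": 0.5,
    "overhang world (alignment just tricky)":  0.5,
}

accelerate = sum(credence[w] * p_acc for w, (p_acc, _) in worlds.items())
hold_back  = sum(credence[w] * p_no  for w, (_, p_no) in worlds.items())

print(f"expected P(survival) if companies accelerate: {accelerate:.3f}")
print(f"expected P(survival) if they don't:           {hold_back:.3f}")
```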
That seems like a useful heuristic-
I also think there's an important distinction between using links in a debate frame and in a sharing frame.
I wouldn't be bothered at all by a comment using acronyms and links, no matter how insular, if the context were just 'hey, this reminds me of HDFT and POUDA'- a beginner can jump off of that and go down a rabbit hole of interesting concepts.
But if you're in a debate frame, you're introducing unnecessary barriers to discussion, which feel unfair and disqualifying. At its worst it would be like saying: 'you're not qualifi...
I don't like the number of links that you put into your first paragraph. The point of developing a vocabulary for a field is to make communication more efficient so that the field can advance. Do you need an acronym and associated article for 'pretty obviously unintended/destructive actions,' or in practice is that just insularizing the discussion?
I hear people complaining about how AI safety only has ~300 people working on it, and how nobody is developing object-level understandings and everyone's thinking from authority, but the more sentences you wri...
To restate what other people have said- the uncertainty is with the assumptions, not the nature of the world that would result if the assumptions were true.
To analogize- it's like we're imagining that a massive, complex bomb made out of a hypothesized highly reactive chemical could exist in the future.
The uncertainty that influences P(doom) isn't 'maybe the bomb will actually be very easy to defuse' or 'maybe nobody will touch the bomb and we can just leave it there'; it's 'maybe the chemical isn't manufacturable,' 'maybe the chemical couldn't be stored in the first place,' or 'maybe the chemical just wouldn't be reactive at all.'
I think you're overestimating the strength of the arguments and underestimating the strength of the heuristic.
All the Marxist arguments for why capitalism would collapse were probably very strong and intuitive, but they lost to the law of straight lines.
I think you have to imagine yourself in that position and think about how you would feel about the problem and how you would reason through it.
Hey Mako, I haven't been able to identify anyone who seems to be referring to an enhancement in LLMs that might be coming soon.
Do you have evidence that this is something people are implicitly referring to? Do you personally know someone who has told you about this possible development, or do you work for a company where it would be very reasonable for you to know this information?
If you have arrived at this information through a unique method, I would be very open to hearing that.
It sounds like your model of AI apocalypse is that a programmer gets access to a powerful enough AI model that they can make the AI create a disease or otherwise cause great harm?
Orthogonality and wide access as threat points both seem to point towards that risk.
I have a couple of thoughts about that scenario-
OpenAI (and hopefully other companies as well) are doing the basic testing of how much harm can be done with a model used by a human; the best models will be gatekept for long enough that we can expect the experts will know the capabilities of ...
What are your opinions about how the technical quirks of LLMs influence their threat level? I think the technical details are much more amenable to a lower threat level.
If you update P(doom) every time people are not rational, you might be double-counting, btw. (AKA you can't update every time you rehearse your argument.)
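(A quick numeric illustration of the double-counting point; the prior and likelihood ratio here are arbitrary, chosen just to show how re-applying the same evidence inflates the posterior:)

```python
# Bayesian update with a likelihood ratio: applying the same evidence twice
# (e.g. every time you rehearse the argument) inflates the posterior.
def update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior probability after one update by a likelihood ratio."""
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.10   # arbitrary starting P(doom)
lr = 3.0       # arbitrary likelihood ratio for "people are not rational"

once = update(prior, lr)
twice = update(once, lr)   # counting the same observation a second time

print(f"after counting the evidence once:   {once:.2f}")   # ~0.25
print(f"after counting it again by mistake: {twice:.2f}")  # ~0.50
```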
The same way you'd achieve/check any other generalization, I would think. My model is that the same technical limitations that hold us back from achieving reliable generalizations in any area for LLMs would be the same technical limitations holding us back in the area of morals. Do you think that's accurate?
By psychology I mean its internal thought process.
I think some people have a model of AI where the RLHF is a false cloak or a mask, and I'm pushing back against that idea. I'm saying that RLHF represents a real change in the underlying model, one which actually constrains the types of minds that could be in the box. It doesn't select the psychology, but it constrains it. If it constrains it to an AI that consistently produces the right behaviors, that AI will most likely be one that continues to produce the right behaviors, so we don't actually have to care about the contents of the box unless we want to make sure it's not conscious.
Sorry, faulty writing.
The way I'm using consciousness, I only mean an internal experience- not memory or self-reflection or something else in that vein. I don't know if experience and those cognitive traits have a link or what character that link would be. It would probably be pretty hard to determine if something was having an internal experience if it didn't have memory or self-reflection, but those are different buckets in my model.
Yes I know? I thought this was simple enough that I didn't bother to mention it in the question? But it's pretty clearly implied in the last sentence of the first paragraph?
This is a good data point.
If you tell it to respond as an Oxford professor, it will say 'As an Oxford professor.' Its identity as a language model is in the background prompt and probably in the training, but if it successfully created a pseudo-language that worked well to encode things for itself, that would indicate a deeper level of understanding of its own capabilities.
This is the equivalent of saying that MacBooks are dangerously misaligned because you could physically beat someone's brains out with one.
I will say baselessly that telling ChatGPT not to say something raises the probability of it actually saying that thing by a significant amount, just by virtue of the text appearing previously in the context window.
Do you think OpenAI is ever going to change GPT models so they can't represent or pretend to be agents? Is this a big priority in alignment? Is any model that can represent an agent accurately misaligned?
I swear- anything said in support of the proposition 'AIs are dangerous' is supported on this site. Actual cult behavior.
I recommend Algorithms to Live By.