All of madhatter's Comments + Replies

Someone on Hacker News had the idea of putting COVID patients on an airplane to increase air pressure (which is part of how ventilators work, due to Fick's law of diffusion).

Could this genuinely work?
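For reference, a minimal sketch of the physics being invoked (standard physiology, not from the original comment): by Fick's law, the rate of gas transfer across the alveolar membrane scales with the oxygen partial-pressure difference,

$$ J = \frac{D \cdot A}{T}\,(P_1 - P_2) $$

where $J$ is the transfer rate, $D$ the diffusion coefficient, $A$ the membrane area, $T$ its thickness, and $P_1 - P_2$ the partial-pressure gradient. Since $P_{O_2} \approx 0.21 \times P_{\text{ambient}}$, raising ambient pressure does raise the gradient; note, though, that airliner cabins are typically pressurized to only about 0.75 atm, below sea-level pressure.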

2Chris Hibbert
Airplane cabins are pressurized to less than ground-level pressure, I'm pretty sure. They're trying to reduce the consequences of being at altitude, not to increase pressure above sea level.

Hey, I'm a Bioinformatics/Computer Science student at the University of Copenhagen, and I'd like to help any way I can. If there's anything I can do, let me know.

Is there no way to actually delete a comment? :)

[This comment is no longer endorsed by its author]
0Viliam
Not after someone already replied to it, I think. Without replies, you need to retract it, then refresh the page, and then there is a Delete button.

Never mind, this was stupid.

[This comment is no longer endorsed by its author]
0turchin
In your case, a force is needed to actually push most organisations to participate in such a project, and the worst ones - those that want to build AI first in order to take over the world - will not participate in it. The IAEA is an example of such an organisation, but it was not able to stop North Korea from creating its nukes. Because of the above, you need a powerful enforcement agency above your AI agency. It could either use conventional weapons, mostly nukes, or some form of narrow AI to predict where strong AI is being created - or both. Basically, it means the creation of a world government designed specifically to contain AI. That is improbable in the current world, as nobody will create a world government mandated to nuke AI labs based only on reading Bostrom's and EY's books. The only chance for its creation is if some very spectacular AI accident happens, like narrow AI with some machine-learning capabilities hacking 1000 airplanes and crashing them into 1000 nuclear plants. In that case, a global ban on AI seems possible.
1WalterL
The reliable verification methods are a dream, of course, but the 'forbidden from sharing this information with non-members' is even more fanciful.

Where did the term on the top of page three of this paper after "a team's chance of winning increases by" come from?

https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf

Will it be feasible in the next decade or so to actually do real research into how to make sure AI systems don't instantiate anything with any non-negligible level of sentience?

1turchin
If the question is for me, I think no. It may be simpler to build a bad AI able to kill anyone in the next decade than to solve the nature of consciousness and thus learn whether any AI actually has subjective experiences. Two years ago I was working on a plan for research into the nature of consciousness, but I later mostly abandoned it, as I think it is not an urgent question.

Two random questions.

1) What is the chance of AGI first happening in Russia? Are they laggards in AI compared to the US and China?

2) Is there a connection between fuzzy logic and the logical uncertainty of interest to MIRI, or not really?

1ChristianKl
The kind of money that projects like DeepMind or OpenAI cost seems to be within the budget of a Russian billionaire who strongly cares about the issue. But there seem to be many countries that are stronger than Russia: https://futurism.com/china-has-overtaken-the-u-s-in-ai-research/
0MrMind
On 2, I'd say not really: fuzzy logic is a logic which has a continuum of truth values. Logical uncertainty works by imposing, on classical logic, a probability assignment that is as "nice" as possible.
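A toy contrast, purely my own illustration of the distinction (not from the comment or any MIRI formalism): in fuzzy logic the statement itself carries a graded truth value, while under logical uncertainty every statement is plainly true or false and the numbers are coherent credences about which.

```python
# Fuzzy logic: truth values live on [0, 1] and combine via t-norms.
def fuzzy_and(a, b):
    return min(a, b)          # one common choice of t-norm

def fuzzy_not(a):
    return 1.0 - a

tall, rich = 0.7, 0.4         # "Alice is tall" is 0.7 true, "Alice is rich" is 0.4 true
print(fuzzy_and(tall, rich))  # 0.4 - a degree of truth, not a probability

# Logical uncertainty: the statement has a definite truth value we just don't
# know (e.g. a far-out digit of pi); credences must respect classical logic.
p_even = 0.5                  # credence that some uncomputed digit of pi is even
p_odd = 1.0 - p_even          # coherence forces P(A) + P(not A) = 1
print(p_odd)
```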
0[anonymous]
1) low chance 2) no connection

Any value in working on a website with resources on the necessary prerequisites for AI safety research? The best books and papers to read, etc. And maybe an overview of the key problems and results? Perhaps later that could lead to an ebook or online course.

0whpearson
Did you see this thread on making an online course? It is probably a place to co-ordinate this sort of thing.

Thoughts on Timothy Snyder's "On Tyranny"?

Anything not too technical about nanotechnology? (Current state, forecasts, etc.)

0darius
Radical Abundance is worth reading. It says that current work is going on under other names like biomolecular engineering, that the biggest holdup is a lack of systems engineering focused on achieving strategic capabilities (like better molecular machines for molecular manufacturing), and that we ought to be preparing for those developments. It's in a much less exciting style than his first book.
0gilch
Engines of Creation is the classic. It's much less technical than Nanosystems.

Well, "The set of all primes less than 100" definitely works, so we need to shorten this.

0Thomas
It doesn't work; it's a trivial solution. Get rid of the word 'prime'. Rephrase! But even if that were a solution, it's not the shortest one.

More specifically, what should the role of government be in AI safety? I understand tukabel's intuition that they should have nothing to do with it, but if unfortunately an arms race occurs, maybe having a government regulatory framework in place is not a terrible idea? Elon Musk seems to think a government regulator for AI is appropriate.

I really recommend the book Superforecasting by Philip Tetlock and Dan Gardner. It's an interesting look at the art and science of forecasting, and those who repeatedly do it better than others.

Wow, I hadn't thought of it like this. Maybe if AGI is sufficiently ridiculous in the eyes of world leaders, they won't start an arms race until we've figured out how to align them. Maybe we want the issue to remain largely a laughingstock.

Sure. The ideas aren't fleshed out yet, just thrown out there:

http://lesswrong.com/r/discussion/lw/oyi/open_thread_may_1_may_7_2017/

Stuart, since you're an author of the paper, I'd be grateful to know what you think about the ideas for variants that MrMind suggested in the open thread, as well as my idea of a government regulator parameter.

0Stuart_Armstrong
Can you link me to those posts?

One idea I had was to introduce a parameter indicating the actions of a governmental regulatory agency. Does this seem like a good variant?

Hi all,

A friend and I (undergraduate math majors) want to work on either exploring a variant or digging deeper into the model introduced in this paper:

http://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf

Any ideas?

1MrMind
Some ideas:
* different enmities for different teams
* team capabilities drawn from a normal distribution
* utility that does not depend on safety or enmity
* only some teams knowing their own / others' capability
* the probability of winning the race depending on capability, rather than simply the most capable team being the winner
etc. Also, what it says in note 2.
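As a rough illustration of how one of these variants could be explored numerically - a toy Monte Carlo sketch under my own simplifying assumptions (uniform safety choices, a single enmity parameter), not the actual model from the paper:

```python
import random

def simulate_race(n_teams=3, capability_sd=1.0, enmity=0.5, trials=10000):
    """Toy AI-race variant: capabilities drawn from a normal distribution;
    skimping on safety boosts a team's effective capability."""
    disasters = 0
    for _ in range(trials):
        teams = []
        for _ in range(n_teams):
            capability = random.gauss(0.0, capability_sd)
            safety = random.random()              # 1.0 = fully safe, 0.0 = reckless
            effective = capability + enmity * (1.0 - safety)
            teams.append((effective, safety))
        winner_safety = max(teams)[1]             # most effective team wins
        if random.random() > winner_safety:       # less safety, more disasters
            disasters += 1
    return disasters / trials

if __name__ == "__main__":
    for sd in (0.5, 1.0, 2.0):
        print(f"capability sd = {sd}: P(disaster) ~ {simulate_race(capability_sd=sd):.3f}")
```

Per-team enmity values, hidden capabilities, or a probabilistic winner would be small modifications to the same loop.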
0madhatter
One idea I had was to introduce a parameter indicating the actions of a governmental regulatory agency. Does this seem like a good variant?
0Thomas
Could be. Seems possible, likely even.

It becomes uncomfortable for me to stay in bed more than about half an hour after waking up.

0[anonymous]
This matches my own experience.

Suppose it were discovered with a high degree of confidence that insects could suffer a significant amount, and almost all insect lives are worse than not having lived. What (if anything) would/should the response of the EA community be?

1ZankerH
My mental model of what could possibly drive someone to EA is too poor to answer this with any degree of accuracy. Speaking for myself, I see no reason why such information should have any influence on future human actions.
1gilch
Don't farm crickets. Seriously, that's about all we can do in the short term. We can try to not make the problem worse. Fixing this completely is likely a post-singularity problem. Thus, EA should invest in MIRI.

We can't feasibly eradicate all the insects now--it's been said that cockroaches would survive a global nuclear war. And even if we could, it would mean the extinction of the human species. We're too dependent on them ecologically. If we tried, we'd likely kill ourselves before we got all of them, then the suffering wouldn't end until the sun eventually heats up enough to burn up the biosphere. Patience now is the better strategy. It ends the suffering sooner.

Someone might suggest gene drives, so I'll address that too. We can't use them for eradication of all insect species. Some of them would likely develop resistance first, so we'd have to be very persistent. But we humans wouldn't last that long.

What might work is to alter insects genetically so they don't suffer. If we can figure out how to do this we could then try to put the modification on the gene drive, but this is also very risky. Messing with the pain systems might inadvertently make suffering worse, but also make it less obvious. Nature invented pain for reasons. Turning it off would likely put those insects affected at a selective disadvantage. Suffering might evolve again after we get rid of it. Scaling the drive could unbalance the ecology and thereby damage human populations enough that we couldn't continue the project. It would take a great deal of research to pull this off.

Short of an intelligence explosion, we'd have to genetically engineer an artificial ecology that can sustainably support human life in outer space, but doesn't suffer. We'd then have the capability to move human civilization off-planet (very expensive), and then use giant space mirrors to start a runaway greenhouse effect that makes Earth look like Venus, finally eradicating the old miserable biosphere. This woul
2eukaryote
There's a lot of uncertainty in this field. I would hope to see a lot of people very quickly shift a lot of effort into researching:
* Effective interventions for reducing the number of insects in the environment (without, e.g., crashing the climate)
* Comparative effects of different kinds of land use (e.g. farming crops or vegetables, pasture, left wild, whatever) on insect populations
* Ability of various other invertebrates to suffer (how about plankton, or nematodes? The same high-confidence evidence showing insects suffer might also show the same for their smaller, more numerous cousins)
* Shifting public perceptions of gene drives
* Research into which pesticides cause the least suffering

Currently it seems like Brian Tomasik & the Foundational Research Institute, and Sentience Politics, are paying some attention to considerations like this.
7[anonymous]
Are you familiar with the work of either Brian Tomasik or the Foundational Research Institute? Both take mass suffering very seriously. (Including that of insects, video game characters, and electrons. Well, sort of. I think the last two are just weird EV-things that result when you follow certain things to their logical conclusion, but I'm definitely not an expert.)
0Lumifer
Effective Altruism is not Animal Rights.
1MrMind
I think that for any sensible actions to be designed, you should also show whether suffering is additive or not.
0Thomas
Every atom shall be used for computronium anyway, so there will be no (insect) pain anymore. We should be very careful what to upload then. But it's EA you are asking about. What should their response be? I have no idea. I don't see any use for this movement in this context - or in almost any other context, either.

This is a cool idea! My intuition says you probably can't completely solve the normal control problem without training the system to become generally intelligent, but I'm not sure. Also, I was under the impression there is already a lot of work on this front from antivirus firms (e.g. spam filters).

Also, quick nitpick: We do for the moment "control our computers" in the sense that each system is corrigible. We can pull the plug or smash it with a sledgehammer.

2whpearson
I think there are different aspects of the normal control problem. Stopping it from having malware that bumps it into desks is probably easier than stopping it from having malware that exfiltrates sensitive data. But having a gradual progression and focusing on control seems like the safest way to build these things.

All the advancements of spam filtering I've heard of recently have been about things like DKIM and DMARC, so not based on user feedback. I'm sure Google does some things based on users clicking "spam" on mail, but it has not filtered into the outside world much. Most malware detection (AFAIK) is based on looking at the signatures of the binaries, not on behaviour; to do that you would have to have some idea of what the user wants the system to do.

I'll update the control-of-computers section to say I'm talking about subtler control than wiping/smashing hard disks and starting again. Thanks.
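As a rough illustration of the signature-vs-behaviour distinction (the file paths and hash below are placeholders of my own, not anything from the comment or a real AV product):

```python
import hashlib

# Signature-based detection: hash the binary and look it up in a blocklist.
KNOWN_BAD_SHA256 = {
    # placeholder entry (this is just the hash of the empty string)
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def malicious_by_signature(path: str) -> bool:
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return digest in KNOWN_BAD_SHA256

# Behaviour-based detection needs a model of what the user *wants* the
# program to do - which is exactly the hard part the comment points at.
SENSITIVE_PREFIXES = ("/home/user/.ssh/", "/home/user/.mozilla/")

def malicious_by_behaviour(files_accessed, user_approved_prefixes=()):
    return any(
        f.startswith(SENSITIVE_PREFIXES) and not f.startswith(tuple(user_approved_prefixes))
        for f in files_accessed
    )
```

The first check never has to ask what the user intended; the second is only as good as the (hand-written here) model of intended behaviour.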

I'd like to see the end of state lotteries, although I know that's not gonna happen.

3gilch
There may be other approaches. A little searching reveals that six states don't have lotteries. And they have different reasons.

Alabama, Mississippi, and Utah have long resisted due to religious objections. Spreading the Gospel may not be an approach we approve of, but it proves that cultures can develop immunity to certain common human failures. There have historically been successful efforts to shift culture via media and education. Designated drivers are a notable example. Perhaps something similar could work.

Surprisingly, Nevada is one of the six, despite rampant legalized gambling. There's not enough cultural objection here. What there is, instead, is a big casino lobby that doesn't like competition. A well-funded, well-organized lobby can overrule an unorganized majority. The Prohibition is a notable example. A constitutional amendment would do the job, but we probably wouldn't need to go that far.

The last two are Alaska and Hawaii. The reason for this is that they don't border other states. You see, the other Bible Belt states also resisted lotteries for a time, but when your citizens can just cross the border to a neighbor to get their tickets, then a very compelling argument arises in the state legislature: "If they're doing it anyway, shouldn't we get the tax money?". This caused a kind of domino effect and state lotteries proliferated. It also means that focusing on one state at a time is probably not going to work. I'm not sure how else this insight helps us.
0gilch
It's a tax on the mathematically challenged. The obvious path forward is better math education.

Haha, yeah, I agree there are some practical problems.

I just think that, in the abstract, ad absurdum arguments are a logical fallacy. And of course most people on Earth (including myself) are intuitively appalled by the idea, but we really shouldn't be trusting our intuitions on something like this.

1simon
If 100% of humanity are intuitively appalled by an idea, but some of them go ahead and do it anyway, that's just insanity. If the people going ahead with it think that they need to do it because that's the morally obligatory thing to do, then they're fanatic adherents of an insane moral system. It seems to me that you think that utilitarianism is just abstractly The Right Thing to Do, independently of practical problems, any intuitions to the contrary (including your own), and all that. So, why do you think that?
0g_pepper
I don't see why not; after all, a person relies on his/her ethical intuitions when selecting a metaethical system like utilitarianism in the first place. Surely someone's ethical intuition regarding an idea like the one that you propose is at least as relevant as the ethical intuition that would lead a person to choose utilitarianism.

I don't see why. It appears that you and simon agree that utilitarianism leads to the idea that creating utility monsters is a good idea. But whereas you conclude from your intuition that utilitarianism is correct that we should create utility monsters, simon argues from his intuition that creating a utility monster as you describe is a bad idea to the conclusion that utilitarianism is not a good metaethical system. It would appear that simon's reasoning mirrors your own. As the saying goes - one person's modus ponens is another person's modus tollens.
3simon
Because it doesn't seem right to me to create something that will kill off all of humanity even if it would have higher utility. There are (I feel confident enough to say) 7 billion plus of us actually existing people who are NOT OK with you building something to exterminate us, no matter how good it would feel about doing it. So, you claim you want to maximize utility, even if that means building something that will kill us all. I doubt that's really what you'd want if you thought it through. Most of the rest of us don't want that. But let's imagine you really do want that. Now let's imagine you try to go ahead anyway. Then some peasants show up at your Mad Science Laboratory with torches and pitchforks demanding you stop. What are you going to say to them?

I have said before that I think consciousness research is not getting enough attention in EA, and I want to add another argument for this claim:

Suppose we find compelling evidence that consciousness is merely "how information feels from the inside when it is being processed in certain complex ways", as Max Tegmark claims (and Dan Dennett and others agree). Then, I argue, we should be compelled from a utilitarian perspective to create a superintelligent AI that is provably conscious, regardless of whether it is safe, and regardless of whether it kill...

0Viliam
Are we actually optimizing for "subjective happiness"? That's the wireheading scenario. I would say that wireheading humans seems better than killing humans and creating a wireheaded machine, but... both scenarios seem suboptimal. And if you instead want to make a machine that is much better at "human values" (not just "subjective happiness") than humans... I guess the tricky part is making the machine that is good at human values.
7simon
I would consider the option of creating a utility monster to be a reductio ad absurdum of utilitarianism.

No, at least not yet. That's a good point. But Facebook is a private company, so filtering content that goes against their policy need not necessarily violate the constitution, right? I don't know the legal details, though, I could be completely wrong.

2Lumifer
Facebook can filter the content, yes, but we're not discussing the legalities, we're discussing whether this is a good idea.

I agree there is a big danger of slipping down the free-speech slope if we fight too hard against fake news, but I also think we need to consider a (successful) campaign by another nation to undermine the legitimacy of our elections to be an act of hostile aggression, and in times of war most people agree that some measured limitation of free speech can be justified.

5ChristianKl
All of the information submitted to Wikileaks was real. Even if it came from Russia, it had nothing to do with fake news.
5lmn
You know, your campaign against fake news might be taken slightly more seriously if you didn't immediately follow it up by asserting a piece of fake news as fact.
3skeptical_lurker
I've just been skimming the wiki page on Russian involvement in the US election. The other claims seem to just be that there was Russian propaganda. If propaganda and possible spying count as "war" then we will always be at war, because there is always propaganda (as if the US doesn't do the same thing!). The parallels with 1984 go without saying, but I really think that the risk of totalitarianism isn't Trump, it's people overreacting to Trump. Also, there are similar allegations of corruption between Clinton and Saudi Arabia.
7Lumifer
You shouldn't uncritically ingest all the crap the media is feeding you. It's bad for your health. So we are at war with Russia? War serious enough to necessitate suspending the Constitution?

Wow, that had for some reason never crossed my mind. That's probably a very bad sign.

0ThoughtSpeed
Honestly, it probably is. :) Not a bad sign as in you are a bad person, but bad sign as in this is an attractor space of Bad Thought Experiments that rationalist-identifying people seem to keep falling into because they're interesting.
2gjm
Intent to kill!

Perhaps I was a bit misleading, but when I said the net utility of the Earth may be negative, I had in mind mostly fish and other animals that can feel pain. That was what Singer was talking about in the beginning essays. I am fairly certain net utility of humans is positive.

8gjm
If you think that (1) net utility of humans is positive and (2) net utility of all animals is negative, and you are minded to try to deal with this by mass-killing, why would you then propose wiping out all animals including humans rather than wiping out all animals other than humans? Or even some more carefully targeted option, like wiping out a subset of animals chosen to improve that negative net utility as much as possible? [EDITED to fix screwed-up formatting]
1Yosarian2
Ok, that's possible. I still don't think it's that likely, though. In general, at least from my limited experience with animals, most of them are pretty "happy/content" most of the time (as much as that word can apply to most animals, so take it with a grain of salt), so long as they aren't starving and aren't in serious pain right at that moment in time. They do have other emotional responses, like anger or fear or pain, but those only happen in special conditions. I think that's how evolution designed most animals; they're really only under "stress" a small percentage of the time, and an animal under "stress" 24/7 (like, say, an animal in an unhappy state of captivity) often develops health problems very quickly because that's not a natural state for them.

Thanks for your reply, username2. I am disheartened to see that "You're crazy" is still being used in the guise of a counterargument.

Why do you think the net utility of the world is either negative or undefined?

4username2
Interesting username. In all seriousness and with all good intent, I am quite serious when I say that thinking the world is without value is in fact a textbook symptom of depression. But I think you have chosen to ignore the larger point of my comment that morality is really self-determined. Saying that your personal morality leads to an assessment of net negative utility is saying that "my arbitrarily chosen utility function leads to not-useful outcomes." Well.. pick another.

Let me also add that while a sadist can parallelize torture, it's also possible to parallelize euphoria, so maybe that mitigates things to some extent.

0RedMan
'People in whereveristan are suffering, but we have plenty of wine to go around, so it is our sacred duty to get wicked drunk and party like crazy to ensure that the average human experience is totally freaking sweet.' Love it! This lovely scene from an anime is relevant, runs for about a minute: https://youtu.be/zhQqnR55nQE?t=21m20s

Quick note: I put a 1 for the driving question because I don't drive.

1-necate-
Thank you for participating. You can also skip questions if they make no sense for you. I should have advised people to do that if they do not drive.

Can someone explain why UDT wasn't good enough? In what case does UDT fail? (Or is it just hard to approximate with algorithms?)

0dogiv
I've been trying to understand the differences between TDT, UDT, and FDT, but they are not clearly laid out in any one place. The blog post that went along with the FDT paper sheds a little bit of light on it--it says that FDT is a generalization of UDT intended to capture the shared aspects of several different versions of UDT while leaving out the philosophical assumptions that typically go along with it. That post also describes the key difference between TDT and UDT by saying that TDT "makes the mistake of conditioning on observations" which I think is a reference to Gary Drescher's objection that in some cases TDT would make you decide as if you can choose the output of a pre-defined mathematical operation that is not part of your decision algorithm. I am still working on understanding Wei Dai's UDT solution to that problem, but presumably FDT solves it in the same way.
0Stuart_Armstrong
I think it's essentially UDT, rephrased to be more similar to classical CDT and EDT.

So wait, why is FDT better than UDT? Are there situations where UDT fails?

0Stuart_Armstrong
My original post here is in error; see http://lesswrong.com/r/discussion/lw/orn/making_equilibrium_cdt_into_fdt_in_one_easy_step/ for a more correct version.
0Stuart_Armstrong
As I understand it, they're both the same except for the bits that haven't been fully formalised yet (logical uncertainty...). But they are phrased differently, with FDT formulated much closer to classical decision theories.

Well, suppose we could prove that consciousness is not some mystical, supernatural phenomenon; that might increase awareness of the threat of AGI, because it would make it clearer that intelligence is just about information processing.

Furthermore, the ethical debate about creating artificial consciousness in a computer (mindcrime issues, etc.) would very shortly become a mainstream issue, I would imagine.

0ingive
I'm not sure if intelligence and consciousness are one and the same thing; in your words, consciousness/intelligence is information processing. If you conclude that intelligence is information processing, then this might be an aspect of the body, an attribute, in roughly the same way as consciousness. Then that aspect of the body is evolving in machines, called artificial intelligence, independent of conscious experience.

Consciousness has such a wide variety of states, whether it be mystical or religious experiences, persistent non-symbolic experiences, nonduality, or even ordinary states, and so forth. It's fine that these states are seen from the perspective of neurons firing in the brain, but from the state of the beholder it's, well, you know... maybe unsatisfactory to conclude the source is the brain? William A. Richards[1], for example, has the view that the 'hard problem of consciousness' is a philosophical question, and I don't doubt many others who have experienced these states have a more open appreciation for this idea. [4]

But as a philosophical question, even with the assertion that consciousness is information processing, it could be this 'brain being a receiver or reducing valve' philosophical idea. Hence, creating conscious machines means inducing a reducing valve of Mind-At-Large, or a receiver, however you want to look at it. Recent neuroimaging studies have revived Aldous Huxley's philosophical idea[2] that the brain is a reducing valve for Mind-At-Large (consciousness), by showing that reductions in blood flow to certain regions of the brain, for example with psychedelics, lead to a more intense experience.

Probably the most efficient way to accelerate neuroscience research is with AGI, and I wouldn't be surprised if DeepMind's coming AGI were utilized for this purpose, as Hassabis, for example, is a neuroscientist and has been a strong proponent of AGI scientists. [1] https://www.theguardian.com/books/2015/dec/07/william-a-richards-

Is neuroscience research underfunded? If so, I've been thinking more and more that trying to understand human consciousness has a huge expected value, and maybe EA should pay it more attention.

0MrMind
Does it? What do you think would be the expected return of discovering the precise mechanics of consciousness? Or what if neuroscience dissolves consciousness?

I read somewhere NK is collapsing, according to a top-level defector. Maybe it's best to wait things out.

Thanks for this topic! Stupid questions are my specialty, for better or worse.

1) Isn't cryonics extremely selfish? I mean, couldn't the money spent on cryopreserving oneself be better spent on, say, AI safety research?

2) Would the human race be eradicated if there is a worst-possible-scenario nuclear incident? Or merely a lot of people?

3) Is the study linking nut consumption to longevity found in the link below convincing?

http://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2173094

And if so, is it worth a lot of effort promoting nut consumption in moderation?

0Viliam
If we are talking about "extremes", what is the base set here: people's usual spending habits? Because I don't think cryonics is more selfish than e.g. buying an expensive car.
0MrMind
Well, 'better' here does all the work. It depends on your model and ethics: for example, if you think that resuscitation is probably nearer than full AGI, then it's better to be frozen.

The second question I couldn't parse correctly. A nuclear war is unlikely to wipe out humanity in its entirety, and "merely a lot of people" is the exact opposite of extinction, so...?

The third is far from a stupid question. The sample sizes are at least large, but the study has the usual problem of using p-values, which are notoriously very fragile. It would require someone acquainted with statistics to judge the thing better, if it can be done at all.
2Thomas
Here comes another "stupid question" from this one. Couldn't the money spent on AI safety research be better spent on, say, AI research?

Actually, as a tournament player I feel I can help explain the slowness:

The article suggests that this isn't due to increased computational speed or focus, but I think that's wrong. Playing slowly doesn't imply thinking slowly. In a chess game, you have a certain amount of time overall, and often when the position is very complicated players will spend half an hour delving into variations and sub-variations. If it's hard to concentrate, they may just rely on low-calc alternatives, and play faster.

0Elo
Agreed.

I'm not surprised. But I also don't see much utility from this study; most people already believe that coffee helps them focus.

4shev
Don't you think there's some value in doing a more controlled study of it?
0Benquo
Something about "makes play better but slower" feels especially persuasive to me.