I agree with A.Madden. If the question were phrased as $10 trillion in physical wealth that didn't exist before, it would be different. I wouldn't trust myself to manage more than a few hundred billion, and I'd destroy the other $9.6 trillion. Maybe a $75,000 investment trust for myself and about twice that for family and friends. Most of my investment strategies (Grahamian Value modified to account for future demographic, geopolitical, cultural and technological trends) break down at such high valuations. I like the CDI index and I like P.Martin's initiative to tie Canadian African foreign aid to those nations that stamp out corruption (Kenya gets bonus points for denouncing Mugabe; South Africa and Zimbabwe would get nothing at present). So maybe $40 billion to buy down the debt of said nations. I'd saturate research grants for things like apiculture, grain, and desalination research; maybe one billion would cover 10,000 grants. But the real need is probably university research infrastructure, and I'd be worried about being stuck with the operating costs, and about the marginal research gains not being anything close to the research output now. I'd want to harmonize distance-learning education globally, but that is dependent on accreditation and immigration reforms, and I don't think the real gains will accrue until holograms fill bandwidth. So for now I'd make strategic scientific journal sectors free. I would buy up mosquito nets and saturate microfinance penetration (dependent on training door-to-door bankers). Maybe $5 billion? Lots of think-tanks on various subjects. I wasted much time doing casual labour to make rent and may again in the future; no doubt there are millions of others. $2 billion would fund 1,000 think-tanks that mimic university institutions without the need to force superfluous course material.
I'd establish a $10 billion trust for grain storage GMO research, and physical pilot projects that mimic the UK's previous Intervention Storage project, but at a much higher tech level. I'd establish a $10 billion trust to accelerate research into urban robotic greenhouses (inert-gas greenhouse sheaths, robotic pruners, OLEDs, time-release fertilizers, GMOs). I'd short any oil and coal companies that have funded Neoconservative think-tanks or used lawyers that have previously defended the asbestos or tobacco industries: $25 billion. I'd pay developing nations 1/6 the value of building wind turbines instead of coal, as wind-company stock options towards their sovereign wealth funds. A $4 billion trust? Saturate IDE's drip-irrigation and hand-pump market; a $2 billion trust. $5 billion(?) to work with the big US banks to implement grain elevators and exchanges in the developing world, and novel metal commodity contracts in the developed world (semiconductors and polymer solar cells need metals like gallium). I'd match any CSA funding if they wanted to reinstate NASA's cancelled NIAC ($500 million). A trust to fight computer hackers (the best way to fight AI threats, from what I can tell), and to sketch out the possibility of a low-footprint gene sequencing and electronics market decades in the future, to fight designer pandemics and AI/AGI: $10 billion. Fighting pandemics appears to turn into fighting Staph infections, so $50 billion to give hospitals in the western world gelFAST alcohol-rub dispensers for their nurses, and U of T nurse hand-wash sensors ($300 a bed) in all hospital beds. $100 billion for basic sewage, health, nutrition and education infrastructure in the developing world. Saturate Cuba's 3rd-world doctor-exporting programme: $500 million. A trust to GMO a crop containing all 8 essential amino acids. At $263 billion here.
The problem is my investment strategy wouldn't work at such a high valuation, and I'd need to devote a lot of time to learning new strategies just to park and manage the trusts. Also, a lot of things I think are high-ROE are properly handled by governments. There is a $10 billion programme that has been put forward to fight mental-health disorders in Canada, certainly useful for managing any futuristic technologies. I like building nursing homes and affordable pedestrian-friendly housing, and I like rejigging agriculture tariff/subsidy rates, but that is the job of government. Same for things like offering free bank accounts, the job of private industry. Most of the above can be distilled to university grants, and to making university accessible to the third world and to first-world adults. Many things I don't know enough about to fund. A $1 billion trust to buy light-weight solar cells and mini wind turbines for the developing world. $2 billion to cut Canada's boreal forest into pieces to fight Mountain Pine Beetle, if it eats Jack Pines. $100 million to fund solid-state hydrogen research. $900 million to buy up rainforests/wetlands in danger. A $1 billion prize trust to give annual awards to leaders that safeguard their own environmental capital. I'd want to buy a Zenn electric car, but they are illegal in Canada. $269 billion.
I'm glad to see this was going somewhere. I'd say yes: if humans have free will, then an AGI could too. If not on present semiconductor designs, then with some 1cc electrolyte solution or something. But free will without the human endocrine system isn't the type of definition most people mean when they envision free will. But I suppose a smart enough AGI could deduce and brute-force it. Splitting off world-lines loses much of the fun without a mind, even if it can technically be called free will. I'd want to read some physics abstracts before commenting further about free will.
"Lets say we, as humans, placed some code on every server on the net that mimics a neuron. Is that going to become sentient? I have no idea. Probably not."
Ooo, even better: have the code recreate a really good hockey game. Have the code play the game in the demolished Winnipeg Arena, but make the sightlines better. And have the game between Russia and the Detroit Red Wings. Have Datsyuk cloned and play for both teams. Of course, programs only affect the positions of silicon switches in a computer. To actually un-demolish a building you need an actuator (magic) that affects the world outside the way lines of computer code flip silicon switches. The cloning-the-player part might be impossible, but at least it seems more reasonable than silicon switches that are conscious.
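For concreteness, "code that mimics a neuron" usually means nothing more than a weighted sum of inputs passed through a threshold. A minimal sketch (the function name, weights and the AND-gate wiring are illustrative assumptions, not taken from any real distributed project):

```python
def neuron(inputs, weights, bias=0.0):
    """Fire (return 1) if the weighted input sum plus bias exceeds zero."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Example: a two-input neuron wired to behave like a logical AND gate.
and_weights = [1.0, 1.0]
and_bias = -1.5

print(neuron([1, 1], and_weights, and_bias))  # -> 1
print(neuron([1, 0], and_weights, and_bias))  # -> 0
```

Running one of these per server would give you a network topology, but the point stands: it is arithmetic on messages between silicon switches, and nothing in the arithmetic reaches out and un-demolishes an arena.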
"No, you have to be the ultimate source of your decisions. If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not."
Once again, a straw man. Free will might not exist, but it won't be disproved by this reasoning. People that claim free will don't claim 100% free will, with actions like willing your own birth. Free-will proponents generally believe the basis for free will is choosing from among two or more symbolic brain representations. Only if the person read a book about the pain of being burned to death in the few seconds between the past contemplating self and the present decisive self would the straw man hold.
In the above example, if the fear of fire is instinctive, no free will. If it is attained through symbolic contemplation *in the past* of what one would do in such a circumstance or how one values neighbourhood civilian lives, or one's desire to be a hero or celebrity, then at least the potential for free will exists.
Once again, free will does not mean willing your own existence; it means choosing from brain symbols in a way that affects your future (if free will exists). I expect to post the exact same argument here on different threads repeatedly, ad nauseam: that free will does not mean willing your own birth (or willing your own present or future, or willing the universe).
I'll ask again, don't tachyons induce feedbacks that destroy the EY concept of a "block MWI universe"?
"...Also, your last two comments are almost completely off-topic."
I was just playing the Devil's Advocate, screwing around to "help" others build debating skills while not telling them I was wasting their time :)
"Tangential argument: existential risk maximizing actors, thank goodness, don't exist, nor do more than a tiny number of people seeking to destroy humanity. Beware the Angry Death Spiral."
I think I'll stand by my words, with one qualification: maybe GWB could start WWIII single-handedly and isn't, so this pertains only to the threat of global warming. S.Harper couldn't be misplaying the threat worse. Canada's governing structure has a provision where the Queen of England is the real head of state, and the Governor General would almost certainly remove our PM from power if he did things like igniting Canada's coal reserves (a nice trick to have in the arsenal, though, if the world is heading towards an Ice Age, as Ice Ages typically onset in decades or less). We are very early in on Global Warming. If GWB and S.Harper were acting as they are now a decade or two from now, my position would be the mainstream. I didn't mean actively as in willfully, like Nazi evil. I meant it more like allowing a population to starve (literally in this context), like Soviet evil inflicted on the Ukraine. GWB and S.Harper know full well what they are doing is greedy, and they both know enough, or are purposely (as opposed to unintentionally) avoiding the knowledge. Yep, I stand by my statement. When history looks back, if we make it there, GWB and S.Harper will be seen as among the worst leaders their respective nations have ever had, solely on the demerits of their handling of Global Warming. B.Obama and S.Dion, solely by coming after them with an environmental platform that doesn't threaten to destroy humanity for short-term profit, will go down in history as at least above-average leaders.
Am I part of an angry death spiral? I think my comments are measured. Probably even kind. S.Harper's first act of government was to cancel 17 Canadian Global Warming research programmes, including a critical ocean one. Do I really need to post what dubya has done on this file? The angry death spiral only happens if Republicans and Conservatives maintain power over the years ahead. America finally cashes in its WWII credit and Canada temporarily loses post-modern status. Not a spiral. Yet.
"But with anyone in this state of mind, I would sooner begin by teaching them that policy debates should not appear one-sided." I think you have to qualify this statement with "unresolved" policy debates.
I'll take the positions: 1) another Holocaust would be a bad thing. 2) global warming is real, and S.Harper and GWB are real existential risk maximizing actors. 3) the US prison economy (construction, staffing and forced prison labour), now consuming more resources than universities in your retarded country, is a conflict of interest. It won't help students at all to adopt the opposite positions.
The problem with taking evil positions "just for kicks" is that many of these positions are adopted in real life. There are powerful (low-teens percentage) political minorities in Europe and Russia that wouldn't mind another Holocaust and would welcome more skeptical minds like EY briefly adopting their positions. Same for oil supporters in Canada and the USA that presently run the world and are actively seeking humanity's destruction. The USA incarcerates a greater % of its population than anyone else; it is practically a 3rd-world country. Slavery is still alive in the USA.
"Unresolved" turns the above brain-sharpening positions into acceptable (but still false) policy positions: 1) Immigration should be reduced, or union jobs should be subsidized with public funds, or cultural minorities should melt into the pot. 2) I'm greedy and would rather consume than stabilize Earth for future generations. 3) We need retarded Republican policies to try to maintain global military hegemony, and the Republican alliance shouldn't be fractured; also, incarcerating Democrats prevents them from voting.
Don't encourage malleable students to adopt evil positions, they may like it.
The nature of time has been covered by many great minds from a religious viewpoint, as mentioned by nick. It is also an active research topic at mainstream universities. I'm not particularly interested in the question, but the best analysis I've read comes from a few N.Bostrom papers, and a book I once read called "Time Machines". The book supposes a block universe, but states very clearly that this may not be the way the universe operates. From what I understand, this means the opposite of what EY wrote: it means the Copenhagen interpretation (that magic causes wavefunction collapses) is a block universe. From my understanding, under MWI the universe would only be deterministic if there were no tachyons (I'm not sure, but I think these are predicted in most GUTs); otherwise there would be feedbacks. Even with no tachyons, the universe would only be deterministic in the past direction. The real question is what causes universes to split off. This is deep physics. There are papers on this topic. If someone were to suggest one, I would read it. The whole point of Tipler's "The Physics of Immortality" was to use shearing forces in a collapsing universe (the universe strongly appears to be open, unfortunately) as an energy source. Where would a never-ending universe fit when viewed through block-universe goggles? Once again I ask: don't tachyons eliminate the block-universe concept for all energy except photons travelling at c?
I'm not discouraging discussion. But there are some topics where this may be a cutting-edge dialectic, such as the nature of minds, the computational power limits (if any) to recursive AI software programs, and AGI/AI controls. But this debate is inferior to mainstream university research. Keep it up, but the real question is how much money to spend on particle accelerators and observatories that might resolve these basic physics questions. The money people use mainstream physicists as their info sources. These mainstream physicists have written papers. If EY's "block universe" hypothesis were correct, we wouldn't experience time. Simple anthropic reasoning disproves it. Time exists. The future is more important than the past. If anyone takes the time to find papers that deal with splitting off universes, I'd attempt to read and discuss them. I hope, if mildly recursive software AI systems are built in the decades ahead and the human brain/mind is modelled by IBM or whoever, that those interested here in AI/AGI will keep up with those findings and not continue to discuss "inferior" content. Maybe I'm just pissed because I realize blogs where GUT amateurs talk about time have limits.
Off-topic, but I suggest EY's idea of an AGI using mixed chemicals to form a mobile robot (and presumably hack the internet) is now dated. With rep-rap and ink-jet polymers, rapid plastics prototyping... a far more likely scenario is that an AI would hack a printer and output some sort of shape-memory device or conducting plastic as an origami crane. Normally this is a moot point, but there may be real defenses that could be dreamed up in these sorts of discussions. If it is not known whether AGI is possible with a 2000BC Egyptian wooden abacus or needs a computer from 10000000AD, but we know people may try to use the same sorts of technologies and/or hacking procedures as weapons, why not diversify one's fields of expertise? If I were to suggest AI/AGI prescriptions to cyber police, I'd suggest cracking down on Eastern European, Russian and Chinese virus writers, and better funding the good guys.
(H.Finney wrote:) "But then, some philosophers have claimed that brains could perhaps influence quantum events, pointing to the supposed collapse of the wave function being caused by consciousness as precedent. And we all know how deep that rabbit hole goes."
How deep does it go? Penrose's (a physicist) quantum brain components (an aspect of neurobiology and philosophy of mind) don't seem to exist, but I had to dig up ideas like the "cemi field theory" on my own in past discussions on this topic (which always degenerated into uploading for immortality and cryonics); they certainly weren't put forward by free-will-naysayer robots.
"(EY wrote:) If you're thinking about a world that could arise in a lawful way, but whose probability is a quadrillion to one, and something very pleasant or very awful is happening in this world... well, it does probably exist, if it is lawful. But you should try to release one quadrillionth as many neurotransmitters, in your reward centers or your aversive centers, so that you can weigh that world appropriately in your decisions. If you don't think you can do that... don't bother thinking about it."
What if it is a fifty-fifty decision? If I see a pretty girl who is a known head-case, I can try to make the neural connection of her image with my boobies-Marilyn-Manson neuron. Once I start to use abstract concepts (encoded in a real brain) to control chemical squirts, I'm claiming the potential for some limited free will. I doubt there are any world-lines where a computer speaker materializes into my lungs, even though it is physically possible. But if I think I'd like to crush the speaker into my chest, it might happen. In fact, I'd bet world-lines split off so rarely that there isn't a single world-line where I attack myself with a computer speaker right now. Has anyone read recent papers describing what variables limit decoherence, assuming MWI? To my knowledge, photon effects only demonstrate a "few" nearby photons in parallel worlds.
Don't faster-than-c solutions to general relativity destroy the concept of MWI as a block universe?
...The think-tank money would include futurism like SIAI and this blog's topics. For longevity research, I think the best way to promote it might be to screen which health/pharma/biotech companies spend the most on R+D in relevant sub-fields. Money would only come in handy to market such a portfolio as "boomer-ethical". I'd want to give R.Freitas money to do diamond-surface-chemistry computer sims, but given that they come down in price every year, I wouldn't be sure of the optimal amount. "Think-tanks" is pretty vague. You'd want to look into the specifics of FDA approval processes in pursuit of reform ideas; you could fund such a think-tank, but the real bottleneck would be educating policy researchers. I'd think any university would respond favourably to instituting new research schools of any type, and probably get matching government funding too. For example, W.Buffett had trouble finding cheap investments as soon as he had tens of billions to play with. The Provincial Government of Alberta couldn't figure out what to do with their oil revenues when approaching eleven digits of play money.