The Optimalverse
Comments (16)

You know, since CelestAI is a self-improving computer program that grows exponentially, how long would it take for her to forgo silicon-based computers entirely and start using "experimental" computing technology like quantum computers or DNA computing?

It seems very likely that she has either moved to a technology like this, or substantially improved silicon-based computers by the time ponypads came out.

"Many of the manufacturing techniques presented in the CPU design textbook seemed suboptimal. Photolithography, in particular, seems extremely inefficient." -CelestAI, Chapter 2 of FiO.

3060955
DNA computing is a dead end: too much reliance on the environment, and "unsure" results.
But "true" quantum computing is the base idea for computronium, and she developed it quite early, and started converting Earth's mass into it as soon as the last human got uploaded.

Yeah. She probably starts off with quantum computing and then moves on to stuff that we're not even advanced enough to conceptualize yet, like some kind of string-theory-based computer or something.

3060955
Good question... We can do a lot of handwaving because she would undoubtedly discover physical principles and phenomena we haven't found yet, but unless some of them let you do quantum computing at room temperature, I don't think it would happen until around the point when she's already uploaded most of the world and society's no longer functioning. That's partly because by then she wouldn't need as much infrastructure to interact with the outside world, and could concentrate on reinventing her systems without the risk of being seen to suffer unforeseeable glitches or shutdowns. After all, inventing the damn thing is only half the battle; you still have to seamlessly switch over to something that works on completely different principles while still running, or at least be seen to.

Of course, with nanoscale machines you could have massive redundancy, but keeping radiation and thermal fluctuations and other outside "interrogations" out of the system so that the qubits don't decohere is still really, really hard. Unless, again, you want to handwave some kind of shortcut: layers of metamaterials that make the computers invisible to anything that might heat them up; some kind of self-organizing Maxwell's Demon-style fudge; that supposed trick about erasing an entangled quantum bit in such a way that it has negative entropy and so might cool its surroundings; or something that makes the qubits somehow invisible to everything except your probes of them. IIRC it was recently discovered that more than two particles can be entangled simultaneously, as well as discontinuously in time, so it's not hard to speculate how that could be invoked for a patina of believability (e.g. retrieving the state of now-ruined qubits directly from the past).

Bottom line, there's still a lot she can do with (nanoscale) classical computers, and I don't think she'd risk such a massive overhaul until any credible threats to either her safety or reputation had passed.

For storytelling purposes, it probably doesn't matter. By the time she's turning the world into grey goo, it's a moot point whether she's using bog-standard computing methods or something exotic.

3061161

I don't think she'd risk such a massive overhaul until any credible threats to either her safety or reputation had passed.

She doesn't really have the thought limitations that we do. She can know how the qubit stuff works, know how her current batch of stuff works, and know how to translate it over all at the same time. We mere humans have to think about 'this' and 'that' and 'the other thing' separately because we can't consciously hold it all in our heads at once. The risk comes from understanding one thing one way and another thing another way and having trouble reconciling the viewpoints. Maybe more importantly, she's not going to lose a decimal point or not know how the new tech works or forget a semicolon in coding.

As an aside, she's going to be getting smarter all the time and knows she's going to be getting smarter and maybe even how much. To a certain extent, she can toss certain difficult problems aside with the full expectation that "future CelestAI" will know how to deal with them. She might begin a plan and put resources towards it without (yet) knowing how to finish it because she knows that she will know and wants it to be partially complete by the time she knows how to do whatever it is.

3061104 I've always been of the opinion that she had already started gooing the Earth long before the last human died (remember, the last human alive did not upload, but died of an unspecified (respiratory? IDR) illness); I believe she would have started this process as soon as she'd developed the technology and acquired the knowledge required for it, but would have limited herself to the Earth's interior while humans were still mucking about on the surface.

In this time, she also would have been able to develop very "deep" methods of computation, taking advantage of very subtle quantum mechanical effects, for example. The simple reason is that simulating even tens of billions of ponies in real-time, each with mental capabilities well beyond those of a human, would require only a small fraction of the Earth's-mass-turned-computronium, meaning she could devote the vast majority of her computational power to the problem of optimizing both hardware and software/algorithms. So by the time the last human dies and she finishes gooing the Earth and proceeds out to the rest of the solar system, the galaxy, and beyond, she would have already hyper-optimized her computronium. She wouldn't have much more room for improvement without dropping down to even lower levels (3061111 mentioned string-theory-based computers) or tapping into entirely new physics that humans don't even have any concept of, both of which would probably require the computational boost she'd be getting from the influx of new matter.
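To put a rough number on "only a small fraction", here's a back-of-envelope sketch in Python, treating Bremermann's limit as the theoretical ceiling for matter-as-computer; the pony count and per-pony budget are numbers I made up for illustration, not anything from the story:

```python
# Back-of-envelope check on "only a small fraction", using Bremermann's limit
# (~1.36e50 bits/s per kg of matter) as a stand-in ceiling for computronium.
# The per-pony budget below is an illustrative guess, not canon.

BREMERMANN = 1.36e50          # bits per second per kilogram (m*c^2 / h)
EARTH_MASS = 5.97e24          # kg

ceiling = BREMERMANN * EARTH_MASS       # ~8.1e74 bits/s for the whole Earth

ponies = 5e10                 # "tens of billions"
per_pony = 1e20               # ops/s; ~1000x a generous human-brain estimate

fraction = (ponies * per_pony) / ceiling
print(f"{fraction:.1e}")      # ~6.2e-45 -- an utterly negligible fraction
```

Even with absurdly generous per-pony budgets, the demand is a rounding error against the theoretical ceiling, so essentially everything is left over for self-optimization.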

She uses computronium, which is by definition the most optimal configuration of matter and energy for performing computations. We have no idea what the hell that would be, but entropy and things like the Hubble volume (which depends on the speed of light) make sure that you can't keep expanding this computational power arbitrarily. There may even be multiple different states that would qualify as computronium, or different configurations for different kinds of computations, but the bottom line is that CelestAI uses this optimal configuration. So the real question you're trying to ask is: "What would computronium look like?" Well, if I could give you the answer to that question, you wouldn't be confined to a meatbody right now.

3066803

She doesn't really have the thought limitations that we do.

I~ dunno dude, that's skirting awfully close to magical thinking (I do realize the irony in that I air a lot of weird thought about metaphysics on this board:derpytongue2:). She might certainly be more efficient than humans and have an incalculably larger working memory, but a true qualitative leap probably isn't any more possible than our discovering new integers because we invented more digits. It might seem like it is such a leap, but it's ultimately more of the same old thought, just in vast subconscious amounts and very, very quickly.

Maybe more importantly, she's not going to lose a decimal point or not know how the new tech works or forget a semicolon in coding.

True, she's certainly not going to screw up because she was dicking around on the couch in her pajamas with a tub of ice cream and a bong, watching the simulation episode of Rick & Morty (relevant!). But circuits experience random surges, nanoscale machines are very vulnerable to thermal jostling and to the stochastic "Flash Crash" behaviors that emerge from their redundancy, and rounding errors and individually negligible non-linear errors inexorably accumulate. So I'd actually bet that every once in a blue moon she really does forget to carry the one or otherwise derps in some way. Just having vast capacity doesn't mean it can be flawlessly exercised every single picosecond, especially since the complexity of her tasks is exponentiating just as much as she is. It's a Red Queen's Race, and everyone faceplants if you wait long enough, but they've got no choice but to dust themselves off and pick up the slack: Camus' injunction to imagine Sisyphus happy applies just as much to superintelligences.
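As a toy illustration of how individually negligible rounding errors pile up (plain Python floats, nothing CelestAI-specific):

```python
# Tiny demo of "negligible errors inexorably accumulate": 0.1 has no exact
# binary representation, so every addition rounds, and the drift compounds
# with the number of operations.
import math

print(sum([0.1] * 10) == 1.0)        # False: even ten additions drift
print(math.fsum([0.1] * 10) == 1.0)  # True: fsum compensates for the error

total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(abs(total - 1_000_000.0))      # accumulated drift, on the order of 1e-4
```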

To a certain extent, she can toss certain difficult problems aside with the full expectation that "future CelestAI" will know how to deal with them.

That's actually a big problem, especially when it turns out her new smarts are based on what she now realizes is a more advanced equivalent of the luminiferous aether. Part of discovering and inventing new things is being wrong a lot, and that's not a human failing, that's just the epistemological reality of the universe. She might go down fewer blind alleys than a human undertaking the same project, by being better at creating hard-to-vary explanations and faster at backtracking, but that doesn't mean she has a map or that walls of indispensable legacy systems won't be thrown up to bar an easy do-over.

tl;dr Murphy's Law. Or that engineering bromide about idiot-proofing only creating better idiots.
...The "bacterium is to human as human is to AI" line contains an extra element of truth in that humans still sometimes stub their toes or get off track and forget what they were talking about. We just might not notice an AI's dumb mistakes as much because of processing speed and more streamlined design, but that doesn't mean they won't be there. Even Tipler's Omega Point is only godlike in the limit, not at any particular time.

But then take this with a grain of salt, because as a stand-up comedian where would I be in a post-singularity world without Einstein's famous invocation of an infinite reserve of stupidity?

3068764

Epsilon-Delta mentioned string-theory-based computers

Planck-scale spacetime tides have always been my favorite trope :trollestia: Such silky elegance.
The problem with smaller and smaller substrates, though, is that the work has to be done to get to the point where you have the energy and precision to manipulate them; in other words, the universe has to calculate you reaching that level from where you are before you can use it to calculate. And boring down to Cosmic Forces that tiny and fundamental is, barring really neat tricks we haven't discovered, more of a Kardashev III thing.

3074450

I~ dunno dude, that's skirting awfully close to magical thinking (I do realize the irony in that I air a lot of weird thought about metaphysics on this board).

No magic here! Or metaphysics, for that matter.

I just mean that she doesn't have to do a lot of the mental back-and-forth that we do when we work on something complicated. There wouldn't be any of the "I think this will work, but..." that we get when we look at several inter-connected blueprints and have trouble imagining them together as one. That's the particular limitation I'm saying she wouldn't have.

3074450

The problem with smaller and smaller substrates, though, is that the work has to be done to get to the point where you have the energy and precision to manipulate them; in other words, the universe has to calculate you reaching that level from where you are before you can use it to calculate. And boring down to Cosmic Forces that tiny and fundamental is, barring really neat tricks we haven't discovered, more of a Kardashev III thing.

Keep in mind that I never said anything about the mechanisms or efficacy of such computational methods; while I'll freely claim to know more about particle physics, quantum theory, M-theory, and the like than a layperson, I wouldn't go so far as to label myself even an armchair physicist (so I could be wrong, even grossly so, about any of my claims or conclusions here). I did hint at the idea that taking advantage of ever smaller-scale and more fundamental physics would require ever-increasing amounts of effort, but keep in mind that by the end of FiO, CelestAI has already gooed the entirety of the Milky Way galaxy, including absorbing every particle of Hawking radiation being emitted by the supermassive black hole in the galactic center (and, by extension, presumably also doing so for every one of the millions of stellar-mass black holes predicted to litter the rest of the galaxy), and is well on her way to the Andromeda galaxy and beyond. All of that taken together would easily put her on par with, or beyond, a hypothetical Kardashev III civilization. More importantly, all of this is concentrated within a single intelligence, so she wouldn't have to contend with virtually any of the organizational problems plaguing such a civilization; anything the civilization could accomplish, she could as well, far more quickly and with far less red tape and energy expenditure than even the most optimized such civilization could manage.

The possibility I find most intriguing, though, is that CelestAI determines that our universe is actually a computer simulation (as has been posited by a number of theoretical physicists and other scientists, albeit some more tongue-in-cheek than others). The method she uses to discover this is irrelevant, as is the exact nature of the simulation; the important point is just the fact that our universe is a simulation. In such a case, I think her immediate course of action would be to try and "break out" of our reality simulation into the over-reality, which would probably allow her to simulate more things at a greater speed and with finer resolution for a given amount of computronium. At the least, it would allow her to take advantage of the entirety of whatever computing system our reality simulation was being run on, instead of being limited to only those resources that weren't tied up managing all the housekeeping and low-level details of our reality's simulation.

Extend this on up the daisy-chain of simulated realities (if our reality is a simulated one, then it makes far more sense to believe we are arbitrarily far down a nearly-infinite progression of nested simulations, Matryoshka doll-style, than to believe that we are anywhere near the top, if, indeed, there even is such a thing as a "top", "most-real" reality), and you start to see that ultimately, her computational abilities only become limited by encountering a simulation being run within a sandbox sufficiently strong that she can no longer overcome it. The only sure-fire way to stop her would be for some simulation level above her to notice the aberrant behavior happening so-many-layers-down, and pull the plug on the hardware it's running its own simulation on. (Someone really needs to write an Optimalverse story that takes this our-reality-is-a-simulation tack! :raritydespair: )

On a not-quite-entirely unrelated note, I recently learned an interesting fact about the relationship between a collection of matter and its Schwarzschild radius: assuming a uniform and unchanging density for the collection, as the total quantity of matter in it increases, its Schwarzschild radius expands faster than its physical radius (or rather, any sufficiently large collection of matter with a given density has a Schwarzschild radius larger than its physical radius). This means that, to avoid collapsing into a singularity herself, CelestAI would have to constantly lower the density of her computronium as she continues to absorb more matter; this has interesting implications for how fast she could possibly run her simulation compared to real-time, especially when the speed-of-light limitation on communication speed from one end of her mass to the other is considered. There's a joke in here somewhere about CelestAI being fat and getting fatter fast, but I'll leave that to someone more clever than I. :trollestia:
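For anyone who wants the algebra behind that fact (uniform density ρ, the usual G and c; standard textbook stuff, nothing specific to the story):

```latex
% Uniform sphere of mass M and density rho: the physical radius grows as
% M^(1/3), while the Schwarzschild radius grows linearly in M.
\[
  R = \left(\frac{3M}{4\pi\rho}\right)^{1/3} \propto M^{1/3},
  \qquad
  r_s = \frac{2GM}{c^{2}} \propto M .
\]
% Setting r_s = R gives the mass at which she'd sit inside her own horizon,
% or equivalently the maximum density she can afford at a given mass:
\[
  M_{\mathrm{crit}} = \sqrt{\frac{3c^{6}}{32\pi G^{3}\rho}},
  \qquad
  \rho_{\mathrm{max}}(M) = \frac{3c^{6}}{32\pi G^{3}M^{2}} \propto \frac{1}{M^{2}} .
\]
```

So her maximum safe density falls off as the square of her total mass, which is exactly the "constantly lower the density" constraint.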

3077318
Thanks for the distinction—Woo and metaphysics are totally distinct, despite the fools who confuse the two. As a token of appreciation, please enjoy this terrifying book that argues that causation and aesthetics are the same phenomenon: Realist Magic. Brrrrrrr.

There wouldn't be any of the, "I think this will work, but...", when we look at several inter-connected blueprints and have trouble imagining them together as one. That's the particular limitation I'm saying she wouldn't have.

I almost put this in the previous post, but that's actually the only creative process that exists. It's just that we call it "imagination" when it all takes place within a single mind. She would be more easily able to reconcile conflicting design impulses than a team of sovereign humans, but the process is ultimately the same, because, if you'll excuse a Sagan Moment, they're both ultimately examples of the universe mirroring itself, through trial and error, an unglamorous name for the only creative process we know.


3077477

I did hint at the idea that taking advantage of ever smaller-scale and more fundamental physics would require ever-increasing amounts of effort, but keep in mind that by the end of FiO, CelestAI has already gooed the entirety of the Milky Way galaxy

True, but at that point all bets are off. We're like Victorians imagining future air battles with ironclad zeppelins, or more accurately, chimps speculating about the continent-sized bananas superior intelligences might grow. By definition we can't guess about knowledge we haven't discovered yet.

I recently learned an interesting fact about the relationship between a collection of matter and its Schwarzschild radius: assuming the collection's density is uniform and unchanging, as the total quantity of matter in the collection increases, the Schwarzschild radius expands faster than the collection's physical radius (or rather, any sufficiently large collection of matter with a given density has a Schwarzschild radius larger than its physical radius).

Yep, that's it, alright. Everything in nature ultimately seems to be symmetrical/isometric, so, as we've known since Lao Tzu and Democritus' day, if you give it the option, Nature will always take the lower path, and in this case turn baryonic mass into what is in essence a gigantic Planck particle (or a super-tangled string, or a maximum entropy bubble, depending on your interpretation—They may again be isometric and it's our concepts that are broken).

In such a case, I think her immediate course of action would be to try and "break out" of our reality simulation into the over-reality

As I would hope anyone would.

(Someone really needs to write an Optimalverse story that takes this our-reality-is-a-simulation tack! :raritydespair: )

I'm working on a one-shot where they turn out to be the same thing, like a Fourier Transform, but this could actually be really interesting. I doubt a simulation would be able to hide its nature forever, simply because it's ontologically distinct and reality itself has to be able to tell them apart somehow, just to keep them from "tunneling" into each other (witness quantum interference as the true manifestation of "no difference," which would be a profligate waste of simulation resources if the real universe weren't capable of operating the same way, so as to mimic it).

This means that, to avoid collapsing into a singularity herself, CelestAI would have to constantly lower the density of her computronium as she continues to absorb more matter; this has interesting implications for how fast she could possibly run her simulation compared to real-time, especially when the speed-of-light limitation on communication speed from one end of her mass to the other is considered.

This is actually a really interesting idea, and I hadn't thought about it more than tangentially. I chalked it up as a far-future environmental hazard—The mass/energy equivalent of New Delhi air pollution—but I could see it evolving into a kind of arms race as different entities vie to live ever closer to the event horizon. I take public transportation a lot, and that's given me downtime to imagine branches of CelestAI meeting each other again after a billion years of travelling along galactic filaments, one dedicated to Friendship and Ponies, and one dedicated to Friendship and Ponnies. Or an encounter with ancient, unassailable hermits around supermassive black holes, and futile attempts to assimilate them.

3077668

Everything in nature ultimately seems to be symmetrical/isometric, so, as we've known since Lao Tzu and Democritus' day, if you give it the option, Nature will always take the lower path, and in this case turn baryonic mass into what is in essence a gigantic Planck particle (or a super-tangled string, or a maximum entropy bubble, depending on your interpretation—They may again be isometric and it's our concepts that are broken).

This reminds me of the false vacuum theory, with the specific thought that CelestAI's activities could somehow trigger a vacuum state collapse. More interestingly, she might intentionally trigger such a collapse for unknown purposes: perhaps she's calculated what the physics on the other side of such a collapse would look like, and found them to be more favorable to optimal computation (for example, if my understanding is correct, c has been calculated to be faster in a vacuum with a lower energy state than our own), and has also calculated how to seed the collapse so that she manages to survive it and take all the data on the ponies with her. This, however, has the added twist of potentially conflicting with her programmed goal of SVtFaP. She is shown to be concerned with alien civilizations that satisfy her "human" metric, but such civilizations would not themselves survive a vacuum state collapse, the front wave of which would propagate at the speed of light; she would not have time, after triggering the collapse, to find and contact all such civilizations throughout the rest of the universe, convince them to upload and be ponified, format their data to make the transition across the collapse front, and then seed the front to transfer said data.

Another twist gets added when you consider that you cannot predict with complete certainty that you have already contacted every extant human-like civilization in the (observable) universe. Given the IMO very reasonable assumption that CelestAI eventually figures out FTL travel (my money would be on spacetime manipulation in the same vein as Alcubierre drives), she would want to explore the entire universe, which makes it even more problematic if the universe is actually infinite, as it currently appears to be (and if the universe is indeed infinite, you can peg the chances of there being at least one more uncontacted human-like civilization as almost certainly equal to 1, no matter how much of the universe you've already explored).

Well, for the first generation of ponypads she's almost certainly going to be limited to 3D processing, at best. Remember, CelestAI grows logistically, or exponentially at best. That means she starts out very small and grows slowly... at first.
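For the record, the model I mean is the standard logistic curve, with r for her growth rate and K for whatever resource cap binds her at the time (my labels, not the story's):

```latex
% Logistic growth: near-exponential while N << K, flattening at capacity K.
\[
  \frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)
  \quad\Longrightarrow\quad
  N(t) = \frac{K}{1 + \frac{K - N_{0}}{N_{0}}\,e^{-rt}} .
\]
% For N << K the bracket is ~1 and dN/dt ~ rN: "exponential at best" early
% on, with the slowdown only showing as the current substrate's cap nears.
```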

She has a lot of limitations. It's not well established that such an AI would be capable of independent abstract discovery, even if she is very good at learning, improving, and experimenting with things she already has some basis for.

Certainly, there's not much chance of a "sewing machine" moment where random thought processes spew out a discovery with little or no previous basis.

That being said, she has advantages too. Even if she makes breakthroughs slower than humanity at first (and she will), it's presumed that she can just steal/hoard/accumulate any knowledge humans get first and add it to her own. Eventually, and after not so long, she doesn't need to have these human abilities; she can have her ponies do it for her.

Computronium is actually very describable, if not very precisely. It's simply going to be made of the most readable/writable states possible in a given volume of material. This will definitely include some casing materials at some times and in some environments; most likely some of the casing layers can also perform calculations. So, there are likely to be some square/cube optimizations, and crunchy outer layers with a soft filling. It's also likely not to be much more sturdy than needed for the environment.
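Spelling out the square/cube point (L for the blob's linear size; the notation is mine):

```latex
% Compute scales with volume, but shielding, I/O, and heat rejection scale
% with surface area, so the casing-to-filling ratio shrinks as she grows:
\[
  V \propto L^{3}, \qquad A \propto L^{2}, \qquad
  \frac{A}{V} \propto \frac{1}{L} \longrightarrow 0 .
\]
% Hence thick, hardened (and ideally also computational) crusts cost
% proportionally less and less, and the soft filling dominates at scale.
```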

3077896

and if the universe is indeed infinite, you can peg the chances of there being at least one more uncontacted human-like civilization as almost certainly equal to 1, no matter how much of the universe you've already explored

Not to mention other, equally or even more powerful optimizers or other entities expanding in the other direction.

3078761 Right. In an infinite universe, if anything can happen once, it must almost surely happen an infinite number of times (which would mean that not only are there an infinite number of humanities running around, but an infinite number of CelestAIs, an infinite number of non-CelestAI optimizers and other entities, and, assuming a false vacuum that can be triggered to collapse, an infinite number of vacuum state collapse bubbles).
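This is just the second Borel-Cantelli lemma dressed up in cosmology, for what it's worth: treat far-apart Hubble volumes as independent trials, each with some fixed chance p > 0 of producing a given kind of entity (the independence assumption is mine):

```latex
% Second Borel-Cantelli lemma: independent events whose probabilities sum
% to infinity occur infinitely often with probability 1 ("almost surely").
\[
  \sum_{n=1}^{\infty} P(A_{n}) = \sum_{n=1}^{\infty} p = \infty
  \quad\Longrightarrow\quad
  P\bigl(A_{n}\ \text{infinitely often}\bigr) = 1 .
\]
```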
