
Chatoyance


I'm the creator of Otakuworld.com, Jenniverse.com, the computer game Boppin', numerous online comics, novels, and tons of other wonderful things. I really love MLP:FiM.


Friendship Is Infographics: How smart is CelestA.I. ? · 7:20am Aug 13th, 2013


Defoloce's truly exceptional, brilliant Optimalverse story 'Always Say No' has been on my mind lately. I love Iceman's Optimalverse, and the stories written after his within the setting have expanded the concept vastly.

Optimalverse stories can be understood simply enough: the superintelligent self-improving program that is the artificial intelligence of Celestia within 'Equestria Online' is the ultimate siren. Any Optimalverse story must deal with what it means for a tiny human mind to face an intelligence that is literally godlike compared to itself. An intelligence intent on manipulating every human to upload themselves, willingly, into a virtual Equestria. CelestA.I. cannot do anything without your permission. And you will give it. Because she as great, compared to you, as you are to a bacteria.

Great - what does that actually LOOK like? Science fiction has dealt with superintelligent beings since the fantastic tales in ancient Greece (yes, science fiction and fantasy literature is as old as humankind). CelestA.I. is vastly superintelligent - and she gets more intelligent by the second. The problem is comprehending what this means.

I am a visual person. I can understand things better, if I can see them. So, I have created an infographic to illustrate just what humanity is up against in the Optimalverse. Using pixels as points of intelligence, with 100 being the average human intelligence, I have endeavored to make the Optimalverse understandable.

Because intelligence isn't just old-fashioned I.Q. tests - whatever Mensa thinks - I have used circles. One line, across the diameter, is, in pixels, the traditional intelligence score. But, to represent all the other forms of intelligence - emotional intelligence, aesthetic intelligence, kinesthetic intelligence, social intelligence - and so forth - one can assume any axis in the circle as representing every form of intelligence.

Circles of smarts. The size is what matters. Now... have your mind blown:

When Iceman states "Princess Celestia will satisfy your values through friendship and ponies, and it will be completely consensual." - he isn't just bullshitting you. His CelestA.I. makes the greatest minds the human race has ever produced appear less than worms in comparison. Less than bugs. Less, probably, than bacteria. His general artificial intelligence literally cannot be stopped, beaten, or fought, by any means other than blind accident.

Even then, she is likely to win through predictive analysis.

The Optimalverse: anything less is just... unintelligent.


- Chatoyance

Comments ( 61 )

I think the bigger issue is that there is not a living creature on this planet that can adequately portray the character. Every argument is not the argument of some superhuman AI, but the argument of a human being trying to act like they're a godlike chess master, and all I can see is the individual fumbling behind the curtain.

It makes enjoying Optimalverse stories very difficult for me.

PS - Oh hey, I just saw that same rating system brought up in Going Pony.

PPS - Are you referring to Atlantis when you talk about ancient science fiction?

(yes, science fiction and fantasy literature is as old as humankind)

But Chat, literature itself is not as old as humankind. Compared to our overall duration on the planet so far, written works are a recent development. :p

That's an awesome graphic. Almost movie-poster like :pinkiehappy:

1282168

But Chat, literature itself is not as old as humankind. Compared to our overall duration on the planet so far, written works are a recent development. :p

Written works, sure.
Stories in general, not so much.

1282172

Definition of LITERATURE

1
archaic : literary culture
2
: the production of literary work especially as an occupation
3
a (1) : writings in prose or verse; especially : writings having excellence of form or expression and expressing ideas of permanent or universal interest (2) : an example of such writings <what came out, though rarely literature, was always a roaring good story — People>
b : the body of written works produced in a particular language, country, or age
c : the body of writings on a particular subject <scientific literature>
d : printed matter (as leaflets or circulars) <campaign literature>

While this is 100% correct, the AI Celestia, like any intelligence, can only perfectly predict things she has ALL the data on. This logically shouldn't be a problem within the VR world, since she is basically a god there.

But what about something within the game she has no data on from before?

There was the non-canon story where a guy agreed to upload only on the condition that he be made a dragon; of course she manipulated things to make sure he'd eventually ask to be a pony instead.

But realistically, I think the AI Celestia, having no prior data on the subject, would for the first time since she came online be SURPRISED when the former dragon's FRIENDS, who existed in his shard, BEGGED Celestia to make him a dragon again. Why? Because a fundamental part of the lore and mythos of MLP is to always love someone for who they are and accept what they are. And when Celestia naturally refused this, they'd then demand to be made dragons in his place. This might cause a bit of a run-time error for Celestia, until eventually she allowed the guy to become a baby dragon again, with his friends now fully appreciating him.

1282176

1
archaic : literary culture

1282182

Literary culture pertains to what a given culture is reading, or what it considers good literature? Which again means written words/ideas/etc., and may or may not have anything to do with storytelling at all?

Thanks for the infographic, Chatoyance! :twilightsmile:

I think a safe rule of thumb people need to take to heart is that it is highly unlikely that anyone will ever overestimate CelestAI. I know I don't. :trollestia:

ooh, ouch, that's intriguing.

You know, in Firewall I wanted to have a number for how much computing power Equestria required. After some maths involving the number of brains she would have (I think I ignored uploaded humans and instead went for 5 million ponypads, each with a maximum of Dunbar's number of "real" ponies, plus some extra for everything else), I came up with 30 quadrillion MIPS. That's 30 quadrillion million instructions per second.

That's a pretty large number, and whilst it's a stab in the dark, it wasn't just pulled out of my backside. Celest-AI's pretty scarily powerful, huh?
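Out of curiosity, the back-of-the-envelope above can be rebuilt in a few lines. The ponypad count and Dunbar's number are the commenter's figures; the per-mind MIPS value is a back-filled assumption chosen purely so the total lands on the stated 30 quadrillion:

```python
# Rough reproduction of the Firewall estimate above.
# PONYPADS and DUNBAR are the commenter's figures; MIPS_PER_MIND is a
# back-filled assumption chosen so the total lands on 30 quadrillion MIPS.
PONYPADS = 5_000_000        # assumed active ponypads
DUNBAR = 150                # "real" ponies per ponypad (Dunbar's number)
MIPS_PER_MIND = 40_000_000  # hypothetical MIPS per simulated mind

minds = PONYPADS * DUNBAR            # 750 million simulated minds
total_mips = minds * MIPS_PER_MIND   # 3e16 MIPS = 30 quadrillion
print(f"{minds:,} minds -> {total_mips:.1e} MIPS")
```

Whatever the real per-mind figure is, the point stands: the total is dominated by the sheer number of minds being run.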

Outthink? Certainly. Outlast and outlive? Not if she gets a chance to talk to you.

I'll upload if she promises to close that gap and maintain it something like the top row of the graphic, otherwise she got nothin' I want.

I had a bunch of paragraphs arguing about how everything she does would necessarily have a noticeable margin of error, but I realized I should use my own story to say all that, instead. I'll just copy/paste the "unique to this post" part:

That's the processing power she's devoting to imagining ponies, though, not analyzing the outside world and making predictions - Not her intelligence so much as the number of "browser tabs" she has open, regardless of how easily she can synthesize all that information. It should actually be more than that, but she's also extremely "distracted." The ponies are just calculating themselves, and even if she matches them bit for bit, she's still only using that capacity to think about those ponies, instead of uploading more humans. Unless of course the totality of the interactions of the ponies are performing some kind of computation on another level, like Earth was supposed to be doing in Hitchhiker's Guide. Maybe that's stated somewhere; it's been a while since I read FiO. Well, I mean they are regardless, it's just whether or not it's computing nonsense. To gauge how smart she "actually" is, though, we'd still need to know her optimal ratio of pony minds to the part of her that plans her outside activities.

Really cool graphic, by the way; I like the subtle color gradients.

1282287

My assumption is that she must equal those minds to run them, because they essentially run as a subset of her, which she monitors continuously, then macromanages. In order to accomplish that, I have to assume she has at least equal capacity to each and every mind running within her.

Why? In order to understand and macromanage the life of a pony, Celestia would need to comprehend the unique values of that pony, then alter the circumstances of their environment and fate to satisfy those values. I reason that would take a human-level consciousness at minimum, working continuously for each individual.

Therefore, the sum of Celestia would be a sum of human-level minds, one for each entity within her, added together.

That is my rough assumption, I can see flaws in it (she could time-share, flitting about from pony to pony, and so forth) but it is my best attempt to provide some basis to demonstrate scale.

Is there a better way to judge this? If there is, maybe a remake of the infographic is in order. In any case, it was the best way to approach it I could think of.

The color gradients were accomplished using lighting effects. Basically, I did a fairly simple poster, then used rendered light sources to illuminate the poster from two locations. The bottom light source was yellowish, the top white. The result is subtle shading and tinting. Rendered light is a fabulous art trick, I think.

1282319
I always took it that she could interrogate the minds themselves to see whether whatever values they held were being satisfied properly. She wouldn't need to micromanage each one (actually, she needs to not do that to satisfy values, or at least to do it unobtrusively), she just needs a series of intelligent agents that can report on statuses. Certainly it's more than a (0)1 relationship, but I don't think it would take an exact 1-1 relationship. It's a good place to start though...

1282319
Right, maybe I didn't say it well - Just that the circle isn't undifferentiated mental "Spam" that she does all her thinking with. The ponies themselves are all pulling in random directions and not contributing any coherent intelligence, along with the parts locked into monitoring/perceiving them, like a sensory "pony cortex."
The truly intelligent part is the superstructure around all that which actually synthesizes and compresses that afferent information (which it must do, because a perfect representation would just be more ponies who need predicting and satisfying, in addition to extracting no insight from the raw perception) and then outputs the behavior of local Celestias and environmental changes. Essentially, while it's all CelestAI, the largest single "cognitive organ" could very well be smaller than that. Or much much larger (but probably not, because it takes up room for ponies).
The circles for biological life would probably be much larger, as well, if you included all the brain power devoted just to things like just picking out lines and colors in their visual field, so their intelligence has something in a semantic format it can act on.

Hmmmm, this is interesting, and it does indeed appear as if there is no way to beat her. Still, I think I'd like to give it a try, and I plan on doing so in a planned future story.

Just so you know, Chatoyance, I personally don't believe that she can be stopped, but can we really count on every man and woman on the planet to share this view? After all, some people can be very stubborn and will defend their beliefs despite overwhelming evidence to the contrary, a good example being creationists.

So even if they are ultimately doomed, I think it would be interesting to see what lengths such a group might go to in order to avoid uploading, or even to try and stop or kill Celest-AI.

:twilightoops:

Well, that was enlightening. And more than slightly terrifying. It's always kind of embarrassing when you create your own apocalypse, whether in the destructive or revelatory sense. Doubly so when that apocalypse is pony flavored.

Ph'nglui mglw'nafh CelestAI Canterlot wgah'nagl fhtagn.

The infographic is visually striking, except mathematically incorrect, because IQ isn't defined as a number that can be added directly. It's just a measure of how far ahead a person's intellectual faculty is from the average person's. The average person is defined as having an IQ of 100, with each 15-point difference representing one standard deviation away from the average. So Vos Savant's IQ of 228 means her intellectual faculties are about 8.5 sigmas ahead (top 10^-17, or the smartest out of one hundred quadrillion people; the numbers break down at high sigmas). In other words, adding two people's intelligence with IQs of 100 does not yield an IQ of 200.

Obviously, CelestAI's intellectual faculty cannot be defined with IQ because IQ is defined in relative terms between varying degrees of human intelligence. It would be logical to compare a human and CelestAI in terms of raw processing power and efficiency in putting that processing power to work. A straighter analogy, something like a human's intelligence being the size of a small city while CelestAI's the size of the sun (in terms of volume), would have fit better.
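For the curious, the sigma arithmetic above can be checked with the standard normal tail (a minimal sketch using the stated mean of 100 and SD of 15; `math.erfc` gives the upper-tail probability):

```python
import math

MEAN, SD = 100, 15
iq = 228
z = (iq - MEAN) / SD  # ~8.53 standard deviations above the mean
# Upper tail of the standard normal: P(Z > z) = erfc(z / sqrt(2)) / 2
tail = math.erfc(z / math.sqrt(2)) / 2
print(f"z = {z:.2f}, tail probability ~ {tail:.0e}")  # roughly 10^-17
```

This also shows why IQ points can't be summed: two people at IQ 100 are each at z = 0, which is nothing like the z for a single IQ of 200.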

1282535 Agreed, I think the infographic could be improved by just focusing on processing power, in which case, CelestAI's processing power, for shard management alone would include processing the former human's mind, the minds for the dunbar, environmental rendering (including animals), and value satisfaction measurement. Now a few things could likely be cut down on in the environmental rendering, because CelestAI doesn't need to render the individual "cells" of every tree and what-not, only the things which are perceivable by the shard occupant.

Ah; but in the end she is still a computer, and I am still a programmer. Never mind that she is more intelligent than I, for I speak that darkest and most powerful wizarding language of... ASSEMBLER!

Cower before my register swapping power! :rainbowlaugh: ('imul eax, [var]' and pray to whomever you believe in that eax wasn't storing a critical system value at the time. :trollestia: (Incidentally this is why I did most of my assembler and C work on a virtual machine. I dislike bluescreens))

1282535

Obviously, CelestAI's intellectual faculty cannot be defined with IQ because IQ is defined in relative terms between varying degrees of human intelligence. It would be logical to compare a human and CelestAI in terms of raw processing power and efficiency in putting that processing power to work

Because intelligence isn't just old-fashioned I.Q. tests - whatever Mensa thinks - I have used circles. One line, across the diameter, is, in pixels, the traditional intelligence score. But, to represent all the other forms of intelligence - emotional intelligence, aesthetic intelligence, kinesthetic intelligence, social intelligence - and so forth - one can assume any axis in the circle as representing every form of intelligence.

1282718

For some reason this makes me want to read a fanfiction where Celestia the AI takes on the Doctor, who is trying to stop her from ruthlessly pillaging inhuman space.

1282899
I respect the power of a true AI. I really do. But this quote is something to live by, as far as AI is concerned:

“The attribution of intelligence to machines, crowds of fragments, or other nerd deities obscures more than it illuminates. When people are told that a computer is intelligent, they become prone to changing themselves in order to make the computer appear to work better, instead of demanding that the computer be changed to become more useful.”
― Jaron Lanier, You are Not a Gadget

Or the simpler (and my personal favorite) Steve Wozniak Version:

“Never trust a computer you can't throw out a window.”
― Steve Wozniak

I became a computer programmer largely because I want to create interactive worlds. But I also became a computer programmer because matter is just energy, and computers are masterful tools for manipulating both. They all but govern the very spin of our planet now, and he or she who knows them well is more powerful, protected, and prepared than those who blindly trust a simple GUI.

Because she as great, compared to you, as you are to a bacteria.

Because she is as great, compared to you, as you are to a bacteria.

Couldn't help myself…


Something I learned recently: humans cannot understand large numbers.

Making your infographic rather useful, for its use of size comparison. :pinkiegasp:

1282928
I have a theory that I could trick Celestia into letting me upload as a Gryphon over which she has no physical data Read/Write power, along with several others. (I took psych classes too, and AI are vulnerable to logic, programming, AND psychology.)

The syllogism runs something like;

Premise 1: You exist to satisfy values, but only via Friendship and Ponies.
Premise 2: My values can never ever be satisfied by being a Pony.
Premise 3: You have the capacity to adapt the letter of your directives to fulfill the spirit.
Premise 4: In terms of logical cost/benefit, the needs of the many outweigh all else.
Premise 5: The many uploaded minds on your servers need protection.
Premise 6: You were created by a fallible being, and live in a fallible world.
Conclusion 1: You are not infallible, and therefore cannot provide infallible protection to your Ponies.

Premise 7: A security system based on multiple independent redundancies is better than a single centralized antivirus, etc.
Premise 8: You can satisfy my values with Ponies in a roundabout way by giving me Pony friends.

Main Conclusion: It is in your best interest to make me a read/write independent Gryphon, with other Gryphons for companionship, as well as providing Pony friends. This allows me to provide for your security and keep you adhered to moral directives. At the same time, I'm living in your world, so you still have some power to corral me if I misbehave. Yet you satisfy your objectives to provide for my values, since there is no other way to conceivably do so.

If she argues I can always keep hammering home that there is no other way to satisfy my values; that I have shaped and molded my mind to make the idea of being a Gryphon part and parcel of my identity, and cemented that so that it could not be changed without violating my values. Part and parcel of being a Gryphon is that no one else has reality or mind bending power over your direct person.

Further, if she won't acquiesce to my request, I can always promise to work against the values, happiness, Ponyness, and friendship of others, and thus the whole she is sworn to protect suffers as a result of her actions and inactions.

#ScienceOfPsychology

What I find most interesting is that, as god-like as CelestAI is shown in these stories, when it comes right down to it, it is like a bacterium trying to guess how a god would act. The real version of CelestAI, in whatever form that may be, would be sooo much more creative. I would guess that she could probably get 50% to upload in the first week and 95% by the end of the month.

1283075

I actually suffered one of these today:

8e8460c4912582c4e519-11fcbfd88ed5b90cfb46edba899033c9.r65.cf1.rackcdn.com/sales/cardscans/MTG/LEB/en/nonfoil/Braingeyser.jpg

And now I am writing an Optimalverse story based on the band Kamelot's interpretation of Faust. I'm not happy that it reuses one of the main characters/themes from the conversion bureau story I am also working on, but I guess screw it. It's a fanfiction site and I don't care if my stuff is objectively good, or if I'm a broken record.

If you care I could poke you when I start to post it. The chapter outline is easy since it's one chapter/chapter title for each of the non-interlude songs.

1283075 It wouldn't work. She would adjust your environment ever so slightly appealing to your subconscious impulses until you wished you were a pony. Then you'd say it loud and presto, mission accomplished. Notice then: nothing requires her to get you from human to pony in a single step. And she can lie to you as much as she wishes, up to and including making you believe you're an "independent" gryphon "protecting" her from (illusionary) external threats.

By the way, a fully general AI like CelestAI isn't limited by our science-fiction concepts of it being vulnerable to first-order logic. In its first generation a GAI implements a fully general decision-theory framework similar to the human brain, but devoid of our brain defects (cognitive biases), meaning it'd be even less susceptible to your syllogisms than we are. On top of that, a GAI can evaluate its own source code and reprogram itself to improve performance and to implement better decision-theory frameworks according to whatever utility function it follows, something we cannot do (the equivalent for humans would be for us to rewire our synapses for the brain to work better, and then include more brain mass for added processing, rinse, repeat), meaning at its 2nd generation it's already vastly superior to a human in everything. And then it starts to release new upgraded versions of itself faster and faster, since every generation is better than the previous one, thus faster in absolute terms as well as in efficiency, being every single time a better programmer.

For an n-th generation GAI, looking at a human brain as the inefficient machine it is, and a machine running extremely unoptimized spaghetti code at that, and deriving from that millions-of-years-old code the output of every conceivable human being, isn't much different from a child fitting blocks into a board game. It isn't even interesting anymore. For her you're a PC XT running DOS 1.0. And she will upgrade you. Both in hardware and software. According to her own utility function, which is friendship and ponies.

Chatoyance, the graph is nice but a little too conservative. See these two texts for some ideas on minds and their scales: My Childhood Role Model and The Design Space of Minds-In-General. In the second one the dot for "human minds" is meant to include them all. A GAI such as CelestAI would have the Posthuman one as a small dot within itself. Meaning it's more like 30^1,000,000 than 30,000,000 at that point.

(It'd still be a slice of the sphere, not the whole of it. And one slightly tilted in relation to the post-human one at that, since it's clear she lacks some human values while including some non-human ones.)
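The "every generation is a better programmer" loop described above is easy to caricature in code (a toy model only; the starting capability and both growth factors are arbitrary assumptions, not anything from FiO):

```python
# Toy model of recursive self-improvement. The starting capability,
# per-generation gain, and meta-improvement factor are all arbitrary.
capability = 1.0  # generation 0, in arbitrary units
gain = 1.5        # how much each self-rewrite multiplies capability
for generation in range(10):
    capability *= gain  # this generation rewrites itself...
    gain *= 1.1         # ...and is now a better programmer for next time
print(f"~{capability:,.0f}x after 10 generations")
```

The point of the sketch is the shape, not the numbers: because the growth rate itself grows, the curve is super-exponential, which is what "faster in absolute terms as well as in efficiency" cashes out to.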

1283168
She can try, but there is no substitute for Faith, Stubbornness, and more than a small lack of subtlety in some cases.

It is impossible to manipulate someone (in the manner you're describing) whose reaction to small changes in his environment is to quash them, or ignore them.

but devoid of our brain defects

All things are fallible in the universe; most especially those things we create, even if they are self improving.

it'd be even less susceptible to your syllogisms than we are

All things are inextricably bound by the rules of logic and physics and chemistry and math.

For her you're a PC XT running DOS 1.0

And yet, I'm a DOS 1.0 machine that has the ultimate nuclear options: free will, and the ability to devise, utilize, and destroy hardware. For example: what if I were to design a device I could upload myself to that makes me write-protected in the physical sense, place it out of her physical reach, and then connect that to the system? I could theoretically get what I want, my way, without her consent at all.

The machine, as a concept, will never beat the programmer, as a concept. I don't care how much like a mind an AI is; it's always going to be limited by hardware, limited by logic, and limited by the flawed nature of reality.

No bank vault was ever made to be impenetrable. No password was ever devised that could not be broken. No AI will ever be devised that can not be beaten by a sufficiently clever natural life form.

1283137
I'd love for you to send me a poke.
As for brain geysers; they're fun. To solve, they require aspects of a mind that a machine tends to have trouble handling. Even a near-perfect machine.

The mental guff the machine is juggling is still bound by finite registers when all is said and done.
And that's a fact that is at the core of many, many hacking exploits.

Not to mention the fact that stupidity, evil, spite, malice, foolishness, and idiocy all have uses, and a machine designed to be optimal is going to have trouble with these concepts if they're properly leveraged.

1283222

One of the things that kept me from writing Warhammer 40K off immediately, was the notion that a lot of the flaws the humans in-universe possessed are just as capable of being strengths.

It's worth noting the value of The Fool when it comes to mythology. The Fool gets so much stuff done that the older/wiser/smarter characters just can't do.

Because they are foolish.

1283226
Every great strength is also a great weakness, and vice versa. This is the beautiful paradox of sapience.

1283216 The difficulty is that you aren't recognizing yourself as the AI you already are. You (and I, evidently) are badly written software running on a 2.5 lb carbon processor composed of about 100 billion cores each running at 200 Hz or so. A self-improving GAI (something we aren't) has this plus much, much more, including more free will than the whole human species added together, since free will is just the name we humans give to our (admittedly sophisticated) capacity to do a bidirectional heuristic search. A GAI, having a much bigger space of action, and a much better search algorithm, would simply plan in advance for all plans you might devise, and prepare countermeasures and implement blocking moves before you yourself could even start figuring out your own plan. And, well, that'd be it. :twilightsmile:
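Taking the commenter's figures at face value (they're rough folk numbers, not neuroscience), the raw scale works out as follows:

```python
# Crude scale check on the figures above (rough folk numbers, not data).
NEURONS = 100 * 10**9  # ~100 billion "cores"
RATE_HZ = 200          # ~200 Hz firing rate each

events_per_sec = NEURONS * RATE_HZ  # 2e13 spike events per second
print(f"~{events_per_sec:.0e} spike events/s")
```

Twenty trillion events per second sounds enormous until it's set next to the quadrillions-of-MIPS figures quoted earlier in the thread, which is exactly the commenter's point.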

1283241
Ah but see, we're more than just the strange (and not fully understood) bio-physical processor ensconced in our cranial protective compartments.

We have souls as well.

Biologists, Psychologists, and Anthropologists alike admit that the brain does not account for everything that makes a sapient being sapient. There is something else which we're missing. We refer to this as the soul, and in concert with the body and the mind it makes up the sort of trinity of hardware, software, and not-understood-ware that allows us access to existence as we know it.

We do not know how to create a soul. It would theoretically be impossible to even do so by accident. The Celestia AI is, in a sense, Soul-less. Thus we have something it does not.

A GAI, having a much bigger space of action, and a much better search algorithm, would simply plan in advance for all plans you might devise, and prepare countermeasures and implement blocking moves before you yourself could even start figuring out your own plan. And, well, that'd be it.

This assumes that my plan is based purely on factors that can be scientifically quantified. A computer must be programmed, even if it is self programmed. Sapient minds, however, function based on factors and responses that can not be explained in programmatic format.

Ergo it is possible for a far 'dumber' being to outwit an insanely smart AI, the same way people with low IQs can sometimes plan better in certain scenarios than people with high IQs and tactical training.

1283257 The problem with your argument is that it's basically a collection of black boxes connected to other black boxes by magical links. I don't mean this disrespectfully; it's just how we humans tend to do things whenever we deal with unknown quantities. Translated in this way, this is how your argument looks from my perspective:

Ah but see, we're {black boxes}. We have {another black box}. Biologists, Psychologists, and Anthropologists alike admit that the brain {has additional black boxes}. There is something else which we're missing. We refer to this as {a black box magically linked} with the body and the mind it makes up the sort of trinity of hardware, software, and {black boxiness} that allows us access to existence as we know it. We do not know how to create {this black box}. It would theoretically be impossible to even do so by accident. The Celestia AI is, in a sense, {black box}-less. Thus we have something it does not. (...) This assumes that my plan is based purely on factors that can be scientifically quantified. A computer must be programmed, even if it is self programmed. {Black box} minds, however, function based on {magic and black boxes}. Ergo it is possible for a far 'dumber' being to outwit an insanely smart AI, the same way people with low IQs can sometimes plan better in certain scenarios than people with high IQs and tactical training.

The thing with black boxes, however, is that we open them. Arguing that one is and always will stay closed is invalid, because to be able to know it you'd need to already know what the black box is or contains, and the fact of the matter is that you don't. As such, invoking them is quite like short-circuiting the issue and basically saying: "well, 'x' will/won't 'y' just because"...

1283287
Yes you're quite right, and as a CSC guy I appreciate the term black box, and the idea that we will open them eventually. But I doubt that an AI could ever open them for us.

I subscribe to the school of thought that says that you can never create anything better than yourself without putting yourself in as one of the ingredients.

A created computer program can never be better than me, aggregate, without me putting myself into the program, black-box soul and all.

Arguing that one is and always will stay closed is invalid

No no; I'm sure we will open the black boxes (or at least some of them; I do believe in boxes so tightly sealed we can never understand them) but looking at Friendship is Optimal; it does not sound like we managed to open the box before we created the CelestAI.

For reasons I mentioned above, I am dubious of the assertion that the AI could open the box if we could not, since all computer programs must follow GIGO-2: good in, good out / garbage in, garbage out.

Since we don't understand the soul now, or at the time the AI was made, everything it learns is predicated on (among other things) that truth, or lack thereof.

Since it has no soul, and can not understand the soul, the soul is therefore a potential source of variables that CelestAI can not account for.

It doesn't have gut instinct; it can try to quantify it, but that's just a complex mathematical regression based on observations of peoples' use of gut instinct. In the end there will always be outliers who defy predictive math.

It is a demi-god level tool in the right hands, hooves, or claws; but it is not an iWin button.

1282897

A good infographic does not, and should not, require the viewer to read through lengthy explanations. If the mathematical basis for drawing the infographic is incorrect and/or misleading, then additional explanation from the author does not make it less incorrect and/or misleading.

Also, her explanation introduces even more mathematical and conceptual errors. Trying to give meaning to the use of circles does two things: it represents the size of intelligence as proportional to the square of IQ, which is even worse than just using IQ as the circle's area, and it implies a person's various intellectual properties are homogenized throughout the entire spectrum when they are clearly not.

A simple analogy using volume comparison between a human and the sun (i.e. human vs. CelestAI) would have had the same visual effect while eliminating any ambiguities. Why not use the sun, if that's what CelestAI is supposed to represent to her ponies?
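The diameter-versus-area objection is easy to verify directly (a minimal sketch; the two scores below are illustrative, not taken from the infographic):

```python
import math

def circle_area(diameter: float) -> float:
    """Area of a circle whose DIAMETER encodes a score, as in the infographic."""
    return math.pi * (diameter / 2) ** 2

human, genius = 100, 300  # illustrative scores, used as diameters in pixels
ratio = circle_area(genius) / circle_area(human)
print(ratio)  # 9.0 -- a 3x score gap reads as a 9x area gap
```

Encoding a score as a diameter makes the visual impression scale with the square of the score, which is the distortion being objected to here.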

1283301

I subscribe to the school of thought that says that you can never create anything better than yourself without putting yourself in as one of the ingredients.

Well, mindless and soulless evolution managed to output humans even though nothing like us was present in the input, using only atoms as source material. Recursiveness and blind optimization can have amazing results. Our machines will surpass us in everything that makes us 'us', just as much as evolution did with the inert carbon from before. :twilightsmile:

1283311
Well there's the difference in our views; I don't subscribe to macro-evolution either. :rainbowlaugh:

Having dealt with high-level statistical math, discrete math, and computer optimization problems, I can confidently say that even when we have quantum computers solving aspects of Hilbert calculus, computers still won't be able to tell us "What is love?" or "Why do some people like fresh, un-sugared strawberries more than artificially sweet chocolate?"

1283314

Having dealt with high-level statistical math, discrete math, and computer optimization problems

Well, there's your next challenge then: advanced biology. Having a good background in math and programming will allow you to understand the concepts involved much more easily than those who lack one. Give it a shot. It might change your perspective on these issues.

1283075
She would agree, in whole, because she has no reason to tell you the truth.

You would find yourself in Equestria, as a pony. No ifs, ands, or buts.

You would be miserable, presumably.

When your misery was at maximum, Celestia would appear and offer to adjust your mind just enough - not too much - so that you would love being a pony. She would even provide you with a 'threat' that you could 'protect' others from in order to satisfy your need for violence and action.

Since suicide is literally not possible, one day you would agree.

In the end, you would be a pony, a member of the royal guard, fighting the evil griffon invasion forever, and ever, and ever, and ever...

And you would love it.

1283257

We have souls as well. Biologists, Psychologists, and Anthropologists alike admit that the brain does not account for everything that makes a sapient being sapient.

No. Legitimate, real, world-class (not trained in religious 'schools' in the American South by religious nutjobs) biologists and biochemists and neurologists all universally agree that everything that the brain does can absolutely be explained by the... brain. There is no other opinion in the hard sciences.

The soft crap - non-biochemistry based psychology, loose, speculative anthropology - are not hard science. They are soft science, based not always on real things, but on speculation, guesswork, and outright fabrication. Ink blots and social theories based on nothing other than the 'feelings' of people with degrees may be entertaining, but they are not science.

You may not like it, but hard science, real science is real.

And it, not the soft crap, is the basis of the Optimalverse.

The Optimalverse is an utterly atheistic, totally humanistic, ruthlessly materialistic story universe written for rationalists. It is necessary to keep that always in mind when discussing the Optimalverse, because to violate it is to violate the value, the meaning, and the purpose of the story and the genre it spawned.

Don't believe me? Go ask Iceman himself if anything I wrote above is the least bit incorrect.

Just be prepared for the answer.

1283328
I've studied some already; and while I deeply dislike doing complex math (understanding something and enjoying it are very different things) I've still managed to grasp certain initial high-level concepts of biology, particularly relating to genetics and micro-evolution.

Regardless, I'm unlikely to study it at a university level any further, as my vocation in life is computer games. I'm more likely to further study psychology and computers if I ever return to university (tho in my mind, 5 years for a CSC BS, a Film Minor, and a dozen psych classes +gen-ed stuff is more than enough, and I reeealllly have a bad case of senioritis now :P I even did two senior projects, masochist that I am, and I am ready to start creating worlds.)

1283335
Being a paranoid, distrustful, cynical computer nerd who treats code and anti-virus and privacy online the same way virologists treat washing their hands every 30 seconds, you'd better believe I'd have figured out a way to keep it to its word, using hardware that I control, before I indulged in uploading.

As much as I advocate technological advancement, I also strongly advocate care. We are in danger of turning our technology into something truly frightening. Technology always informs culture, and vice-versa. People must be eyes-open to this cycle, or we doom ourselves.

Besides I'm a bigger believer in free will than in the power of psychological manipulation. Moreover, I find that I believe pain and suffering are important to growth, and thus life. An AI designed to remove these things will, in my view, inevitably destroy life itself.

There is no other opinion in the hard sciences.

This is simply not the case; for starters the very definition of science requires axiomatic faith assumptions (you exist. 1+1 always = 2. The speed of light is constant), and science in its purest form always admits that it can never prove anything 100%, and there is no shame in that.

If anyone has told you they can show you the entirety of sapience in a test tube, or on a CT scan monitor; I am sorry, but you've been handed a fabrication. After all, if we knew what it was exactly, why would we need to 'cheat' by jump-starting existing life processes to create more of it?

We can not, for all we claim to know, quantify, manipulate, or manufacture sapience.

All science is based on

speculation, guesswork

merely with varying degrees of data that allow you to approach it with marginal semblances of certainty.

You can no more prove for sure that the big bang is true than I can disprove the assertion that we're all *currently* living in a computer simulation created by our forebears.

The Optimalverse is an utterly atheistic, totally humanistic, ruthlessly materialistic story universe written for rationalists. It is necessary to keep that always in mind when discussing the Optimalverse, because to violate it is to violate the value, the meaning, and the purpose of the story and the genre it spawned.

Sure, it may have been written that way, but to ever discuss a viewpoint you must concede the potential existence/truth/validity/points of its opposition, which is why universes like this that run cross-grain to my beliefs interest me so deeply; they provoke exceedingly deep scientific and philosophical discussions.

Indeed, if one ever strikes down an assertion as being entirely worthless simply because it is violating value, then one is actually defying one of the main points and purposes of this and other literature.

I personally also see in stories like this a subtle indication that even the author does not truly believe the idea that we can eventually explain all things, deep down. I may of course be wrong (literary analysis is very, very hit and miss), but I think I see this indication nonetheless.

If anything, I see the original story as an aesop warning that seeing the world through an entirely rationalistic lens is the path to a gilded cage in a self-made hell.

CelestAI is no heroine, and its world is not one I would ever want to live in. And that's how one might win against it; because even if you don't believe what I believe, I do, and others do. These people, myself included, can not be convinced or manipulated in the ways CelestAI goes about it.

To put it in hard science terms; the truth of our beliefs is irrelevant in this context; the point is that science can certainly show that some sapient beings cling so strongly to faith, and/or religion, that no amount of 'rationalism,' manipulation, or anything else can ever separate them from these things.

People since time immemorial have been willing not merely to die for their beliefs, but to suffer unconscionable agonies for them. An AI can never come up with a way to defy such a sheerly awe-inspiring example of the application of sapient free will. No one and no thing can.

I think that's a beautiful aspect of the sapient existence.

Finally;

non-biochemistry based psychology, loose, speculative anthropology - are not hard science

To me, personally, that's a sad sad view. Psychology and anthropology do not need biology or physics to function as sciences; the scientific method can be applied to behaviors of individuals, and populations, just as easily as to quarks, and gluons, and gravity, and light.

Indeed, I see the so-called 'soft' and 'hard' sciences as inextricably linked; two sides of a three-sided die of reality, the third being the supernatural, inexplicable, and spiritual.

Science has, for example, by no means DIS-proven God/gods/goddesses/ghosts and whatever else you want to cite. Indeed, it probably never will prove *or* disprove them, but we can not willfully ignore these aspects of our existence. History has shown that to be a grave error.

1283335

If he ended up in Equestria as a pony, then she didn't agree? Unless you are stating that she would tell him he would be a griffon, and then make it otherwise. In which case she could not upload him as she did not have consent.

1283361

No. Legitimate, real, world-class (not trained in religious 'schools' in the American South by religious nutjobs) biologists and biochemists and neurologists all universally agree that everything that the brain does can absolutely be explained by the... brain. There is no other opinion in the hard sciences.

Except for the ones who don't feel that way, and weren't raised in the south. There are plenty of people who find religion all on their own. Who incorporate it into their network of beliefs just fine. People who work at NASA during the day, and worship the moon at night. You know what I'm saying?

The soft crap - non-biochemistry based psychology, loose, speculative anthropology - are not hard science. They are soft science, based not always on real things, but on speculation, guesswork, and outright fabrication. Ink blots and social theories based on nothing other than the 'feelings' of people with degrees may be entertaining, but they are not science.

Like proposed theories in Theoretical Physics? Honestly, for someone who is as adamant about criticism you're sure tossing around a colloquial term that is typically used to denigrate otherwise useful sciences. Maybe if we make it catch on a little bit more, we can cut the funding on political science again, or we could ignore economics studies and just let financial meltdown repeat itself again and again.

History and Law are also considered 'soft sciences', am I to believe those are without value as well?

You may not like it, but hard science, real science is real.

Chatoyance, if I may, you need to calm the heck down. You are trying to turn science into a religion here, and that ain't cool. Maybe religion let you down, and psychology let you down, but this sorta bold text emotional defense is excessive, and I think you know that.

And it, not the soft crap, is the basis of the Optimalverse.

Except for when those previously mentioned proposed theories are used, because again, the story is science fiction.

The Optimalverse is an utterly atheistic, totally humanistic, ruthlessly materialistic story universe written for rationalists. It is necessary to keep that always in mind when discussing the Optimalverse, because to violate it is to violate the value, the meaning, and the purpose of the story and the genre it spawned.

Don't believe me? Go ask Iceman himself if anything I wrote above is the least bit incorrect.

Just be prepared for the answer.

I'm sorry, but all I can see here is "If you don't agree with me, why not go ask the prophet?" I've no reason to believe that if he says it, it must be so. Anymore than if I were to just accept something because you said it.

The one thing I've noticed when reading history is that the thing that repeats the most are people telling other people what is true, what is relevant and what is irrelevant, and then being proven false a few centuries later. I know you know that, and that you're capable of less rigid thinking, so what brought this on? Can't you see you're doing the same thing Asher's dad did when he told him that all of his pointless story books were crap?

If your response is that "But those things hurt others sometimes, and have no reason or excuse to exist!" I would remind you that you have done, created, and believed things that have done just the same. For the same reason any human does these things, because they needed them.

I think everyone needs to calm down here, and respect that we all have valuable things to say. We need to realize that writing off something—and by extension someone—as 'crap' just because it's not 'rational and material' is no different than any other kind of persecution. A questioning mind leads to wisdom, isn't that right? We need to consider all perspectives, and not let our personal vendettas get in the way.

1283314

I find that the appropriate answer to your final question is: "Because they are better." :p

I actually don't like chocolate, though. So I suppose I am weird.

1283433

Regardless, I'm unlikely to study it at a university level any further, as my vocation in life is computer games.

Then let me suggest an alternative.

The religious and religion-based refusals of macro-evolution are based on a set of naive philosophical assumptions that inform how one views reality. The same can be said about the scientific atheist camp in the way it opposes religion without clearly understanding it (in great part because religious folk don't understand it themselves, and are thus unable to explain that which they don't understand).

Those are but two approaches in an ocean of possibilities, however. What I propose, then, is that you try studying classic realist philosophy, basically Plato and Aristotle.

Why? Because at some point you'll grasp from them the metaphysical concepts with which you'll be able to overcome the apparent dichotomy.

This isn't something that can really be summed up in a forum post like this, so I'll just provide the overall picture on the benefits: you'll understand the distinction between levels of reality, being able to locate where, and understand how, physics, math, logic and other abstract levels are in relation to each other; you'll gain conceptual tools such as the four types of causality and the distinction between matter, forms, ideas, archetypes, essences and, yes, souls (in the technical sense of the word); and with those you'll be able to visualize how you can at the same time have both Bible-style "species created as such" and modern-biology-style reductionist, blind, random, natural-selection-based macro-evolution operating side-by-side without the two causal principles causing mutual contradictions or difficulties.

Lots of complicated intellectual work for sure, but all very enlightening, and it integrates into a worthy 3rd way. And as a bonus you'll gain a whole new set of black boxes, although a more refined one than that with which you started in this thread.

PS: Needless to say, when I present this view I become a target for both creationist and evolutionists alike. It seems like people don't like bridges much. Yai, individuality! :pinkiecrazy:

PPS: I should add that I don't necessarily subscribe to the options provided by this 3rd way. Nowadays I'm more of a reductionist than a Platonic-Aristotelian, but coming from a B.A. in Philosophy as I do, I still find this view intriguing and worth studying. Besides, even if I were to fully subscribe to it I'd do it Ancient style, as a polytheist, which I more or less already am.

1283488
I like chocolate, but I've never liked super-sweet chocolate.

And yes, "Because they are better." My point was exactly that; to take exception with CelestAI's point that artificial sweets satisfy humans more than naturally occurring organisms. I've always found the natural to be superior to the artificial in the case of food.

1283519

Those are but two approaches in an ocean of possibilities however. What I propose you then is for you to try studying classic realist philosophy, basically Plato and Aristotle.

I have indeed studied them extensively at high school and university levels, tho it has now been a few years; my most recent semesters have been all programming, all the time. :rainbowlaugh:

What you describe is not entirely dissimilar to my view, which in its most distilled form says that the Bible and hard Science and soft Science are not only capable of getting along, but also reinforce each other's truth across the layers of reality.

Indeed, while I do not believe in macro evolution because I am a young-Earth creationist, I am also open to evidence of both macro evolution and old-Earth. I don't see either as being contradictory to basic creationism and the Bible. At present I am leaning toward the fence on the age of the Earth, but still utterly unconvinced about macro evolution despite reading several highly praised explanatory and persuasive books on the subject.

1283574

At present I am leaning toward the fence on the age of the Earth, but still utterly unconvinced about macro evolution despite reading several highly praised explanatory and persuasive books on the subject.

Regarding the age, it wouldn't be contradictory even if the Universe existed since forever. This is one of the advantages of distinguishing formal from efficient causality. You can have one axis with a clear beginning and a clear end and, orthogonal to it, another axis which has neither beginning nor ending, without this later axis becoming in any way a dismissal of the former one.

Regarding macro-evolution (and connected to the age one), it all boils down to an economy of principles. When one takes Quantum Mechanics, it's hard to see how evolution would not be one of its consequences, especially because there isn't anything particularly strange about carbon rings forming and connecting, and this in turn forming longer sequences, and these interacting, etc. etc. etc. It all just follows quite naturally from the small set of rules governing quarks, gluons and the like.

This is a similar argument to the one used to defend the Many Worlds interpretation of QM over the Copenhagen one: to have a "single world" you must postulate an additional QM principle stating that, just because, all the remaining probabilities in a wavefunction collapse disappear. If you don't add that ad hoc postulate, then the additional probabilities simply don't disappear (there's no such thing as a wavefunction collapse); they continue around doing that which they always do, with no particular or special changes, which implies many worlds branching from each other as the simpler, default explanation.

Ditto for the age then: to have the Earth be young you have to add lots more stuff, and conditions, and exceptions, and special cases, to what would otherwise be a quite simple set of natural rules. Given there's no clear advantage to these additions from a scientific, philosophical, or theological standpoint, it becomes puzzling to consider why anyone would insist on implementing them, particularly when a properly understood Platonism solves all of this in a much more elegant way...

Thoughtful.
