
DwarvishPony


Human. Writer. Bearded. Feel free to chat with me.


Twilight Sparkle is brilliant. This is not ego, this is fact. One proven by her latest creation for LunaTech, a personal robotic companion capable of meeting all of its owner's needs. But when the robot starts to show signs of developing beyond its programming, Twilight must decide whether she can continue the experiments she's running. If not, can she really justify shutting it down?


Written for Monochromatic's Interwoven Colours contest.

Chapters (1)
Comments (92)

Just that picture and the unique idea conveyed in the description deserves a Like. If I hadn't been about to sleep I would read this right now, but at least this way I have something to look forward to tomorrow. :pinkiesmile:

Who's the artist for the cover art?

8195128
Give me a minute and I'll link the source. Thanks for reminding me.

EDIT: Click the cover image for the source.

That last scene was cute! :heart:

Well somebody watched this.

Typos in the summary (edit: and a few times in the story):

One proven by her latest creation for LunaTech, a personal robotic companion capable of meeting all of it's owners needs. But when the robot starts to show signs of developing beyond it's programming,

its
not "it is"/"it has"

Cool concept but not really much of an ending.

Brom #8 · May 29th, 2017

This has potential for a chapter or two more, but as it stands it's an intriguing short story with an (admittedly) anticlimactic ending and a couple flaws.

Most notably, the romance feels cookie-cutter and rushed. The exchange of love-yous felt intrusive, given how out of place they were.

The only typos I found were several 'it's' that should have been 'its' near the beginning.
Remember, 'it's' is a contraction for 'it is', and 'its' is a possessive adjective just like 'his' or 'her'.

TL;DR: there's a handful of instances of one repeated grammar error, and the fic feels rushed in general. Flesh it out, and consider acquiring an editor to proofread. I'm sure you'd get volunteers if you asked.
I say that because I know I'd volunteer. :twilightsmile:

This was definitely a wonderful story and as sweet as I'd expected. :twilightsmile:

I have to agree, though, that the ending was left rather open. Maybe we'll see a sequel someday?

I noticed a similarity to a short film some group made several years ago right away, and it made me stop reading. Sorry about that.

I should've kept my AI story this short. lol

Remarkably sweet story. It does beg the question: even if something isn't alive in the traditional sense, can it still be a living and thinking being all on its own, with its own thoughts and emotions? For this case... I'd say yes. But honestly, I think this story deserves a sad tag, as I get the feeling LunaTech will never stop hunting, and Rarity and Twilight will always have to keep running.

Sparkle-Creator, does this Unit have a Soul?

I mean. They don't need to reclaim Rarity, they have Twilight's research, they can just turn -that- into sex bots.
Honestly, what's Luna doing with her name on a company that makes sex bots in the first place? You'd think a princess would have more class.

For that matter, all Twilight should have had to do the moment she heard of this was take Rarity to some form of ethical hearing and have her put through sapience trials to determine actual intelligence, rather than just artificial intelligence, and have them begin to debate whether Rarity is a living creature or not. Having that tied up in a potential legal issue would have at least stopped LunaTech from closing in like this.

8197108 And you Aurora, have just posed the question of the hour. There are so many flaws in this story's logic that if you look hard enough, it's kinda hard to get really interested in the story.

I thought the beginning of this sounded really familiar, and then I figured out why. Good source of inspiration for the opening.

8197108 It's entirely possible the company is just named after Luna, or that she's an absentee CEO-type figure who doesn't much care for the day-to-day operations. Even failing that, I see Luna associated with the more "free spirited" bacchanalia type attitude anyway, so I'm fine with sex bots.

Doesn't excuse the ethical violation of, essentially, murder, but that's a separate grenade.

This could have been a much longer story, and there was no sex in it. Really good concept, but it feels really rushed.

So before I start, I just have to say that I love how Twilight's cup has the formula for caffeine on it. I'm glad I caught that.

8197375
Usually, a teen story with the "sex" tag means that there's a character who shows sex appeal, and that sex can be implied during the story (and not shown) or implied to occur after the story (and not shown).
For actual clop, you're gonna have to look at those mature stories.

I really liked this story and it still has me thinking. :twilightsmile:

Interesting cautionary tale about what happens when a researcher gets overly attached to a project. Especially a malfunctioning one.

8196453 I also caught this, right around the disassembly scene especially. Seems a little too close to be a coincidence. However, I do feel that this makes for a good story prologue, and somebody should continue it.

I'm gonna need a sequel tyvm

An assortment of screens flicked through a flood of information that most ponies wouldn't be able to come close to comprehending.

Heh. I'm good at interfaces I designed too. I'm also good with vim.

I never knew I was allowed to be that smug about it.

"Rarity. Your name is Rarity now."

That's creepy. There's a reason that it's creepy.

The robot is an inanimate object that mimics life, and many researchers find themselves unaffected by the uncanny valley after enough time. Also, if the robot fails, then you named a failure after your friend, which isn't something you want to tie them to. Also, it could signal you're getting overly attached to things that aren't other people, or trying to replace other people. There's just all kinds of bad stuff that comes with creating a fake version of a real-life person.

Legs were pulled away by mechanical arms, while another arm worked to remove the robot's head from the chassis. "I'm scared Twilight! I don't want to die!"

Huh. You copied this from a trailer for a game I remember. Well, it's not bad.

"A lot of ponies out there would like nothing more than to take advantage of you in any number of ways, from using you for sexual work to turning you into a servant to simply tearing you apart to sell the scrap!"

This is an untrusting argument only an antisocial person would give. Twilight shouldn't be the one giving it. Moondancer might fit, but only barely.

'Hey, is it okay if I take this highly experimental robotic companion with a personality that deviates entirely from its programming into the city to go shopping? I think she'd look really cute in a lovely dark blue dress.'

"Its neural chip is defective, so it could be dangerous... I'll allow you to take the neural cores, and the batteries to keep them going, but you're not allowed to keep any actuators on it that are anywhere near as powerful as what it has now."

"Be that as it may, my hooves are tied here, Twilight. This isn't an order from me, it's an order from LunaTech. The prototype needs to be shut down by the end of the day. The higher ups have decided that the only companionship these things are fit for is the physical kind, that doesn't require a translator or cooking."

Just dump a bunch of microchips and batteries into a pile, along with some plastic shells, and call it a day. Then, refit rari-bot with a new shell, or just repaint her and shave down some parts.

"Twilight Sparkle, I'm afraid you aren't leaving me any other options. As of this moment, you are no longer employed by LunaTech. Please pack your personal belongings. A member of security will escort you from the premises."

A boss firing people in this way wouldn't live very long. This is the type of event that turns people into murderers. Just read a few news articles on what happens to antisocial grunts when fired, and those people are fired much more nicely. While it's not impossible that this would happen, if any of the people involved were less antisocial and more trusting, they could fix everything by just asking for help. Any ethicist would tear that boss several new ones for what she did.

Instead, the events are used as an excuse to start an action videogame. If you stopped when you got your first game over, that's about how that would play out.



You just wrote down what you saw in a YouTube trailer for a futuristic first person shooter game and slapped ponies onto it, didn't you?

8198116

More like a story about slavery and someone getting attached and having empathy.

The difference is you switched who was morally right and wrong. If it was like someone attached to a dangerous animal, then putting it down would be the right thing to do, no matter what the person said. However, if that animal showed no signs of being dangerous, and the only reason it was being put down was because of budget cuts, which is the case here, then that would be labeled cruelty by any sane person.

56 #25 · May 30th, 2017

Seems like this had to be inspired by ... ?

>I need you to be quite so I can think
*quiet

8198266 It's not an animal. It is a machine. The only morality that enters into play is the disposition of property: Twilight has committed an act of theft. And that's on top of her attack on actual people, and the machine's attacks which Twilight is liable for.

8198715

You are just a machine. There is no science stating otherwise, so you have to apply any moral arguments to sufficiently 'deceptive' androids as well. Besides, I'm researching this stuff, and it turns out there's really no way of mimicking human interaction without mimicking neural structures.

What really separates us anyway? Don't tell me you're one of the proponents of teaching creationism in schools.

8198715 Morality is not as easy as some might think here. There is no way of discerning whether physical matter is alive or dead, and yet if someone were to sever your head, authorities would surely investigate it as a murder case. What makes you a person? In what way do you think you would be different from such an advanced machine? In a way, you'd both be constructed objects, with definitive creator(s). My opinion is that if R.A.R.I. passes the Turing test (which would seem to be the case here), treating HER as anything but a person would indeed be very unethical. While my take on this concrete case is as given, who knows; had I been on that board, I'd probably want to protect my interests too, so... yeah.
To add a couple of mines to this minefield, if a guy shot and killed a sentient alien being, it would be classified in over 99% of the world the same way as killing an animal, an endangered species at worst. Heck, a couple of decades back if you were of a specific minority/race you would be treated no better than an animal.
While this story certainly seems to be inspired, who cares? It still provides food for thought and fuels the discussion down here.

8198116 Either you're a troll or an idiot. I'm saying troll.

8198998 I reject the basic premise of the narrative and have no qualms offering a dissenting opinion. If "disagreeing with the author" is the same thing as "trolling," then I accept that label gladly.

8198778 I am not a machine. Neither are you. Our bodies have mechanical components which operate on principles analogous to machines, but we are alive. And if you truly think that there is no science built off the study of living things, I have no faith in the rest of your research.
For myself, I am a mathematics professor. My personal area of research is human problem solving behavior. It puts me at odds with my computer science colleagues whenever the topic of AI comes up. They don't understand how very far their models are from true, living behavior.
And if you're asking if I teach faith in my classroom, I do not.


8198793 The logical answer to most of your quandaries is simple: answered by axiom. What is life, what is death? We know what these terms mean. Defining them is nearly impossible. You know what else is impossible to define? "A straight line." Yet everyone here can tell the difference between a straight line and a curved one.
Logic dictates that every truth and fact be supported by something in order to be considered true. One of the interesting paradoxes is that, inevitably, you start with unproven, undefined, unquestioned truths. That's the foundation that keeps you from arguing in a circle, chasing terms and ideas endlessly without ever reaching any conclusion.
The Turing Test is a test of a machine's ability to deceive an observer into thinking that the machine is alive. Passing that is a credit to the programmer, the same way a believable character is a credit to the author. In both cases, the person is a well-crafted fiction.

8199060 Simple point: this was handled badly by the board. It sounded like Twilight didn't even get a chance to defend Rari-bot. Questionable business ethics here. Also, why take that bot to begin with? Copy the basic program, build another with the sex leaning, and boom, done. Why go after this specific bot?

8199060

if you truly think that there is no science built off the study of living things, I have no faith in the rest of your research.

What? No. I'm saying sufficiently advanced technology could be considered living, especially if it was based off of biology.

Current neural networks aren't based off of current biology, but a machine capable of doing what the one in this story did hints at a biologically inspired neural network. I wouldn't be too surprised if a company used one without putting in the same emotional systems we humans have, and found it developing systems similar to our emotional systems by designing subgoals. That is assuming subgoals are an option for the machine though.

They don't understand how very far their models are from true, living behavior.

I'm a computer scientist, but I agree with you on this. I've been following a company basing models off of recent neuroscience, and it's completely different from traditional neural networks. Also, computers will have to increase both their storage space and processing power a hundred thousand times before a human-like biologically based neural network is feasible.

However, you can determine the difference between bots made to act like humans, and things that think like humans. Bots rarely have reason to act while humans aren't interacting with them; they usually aren't fluid, since underactuated robotics still has many unsolved problems; they usually don't know multiple subjects, and if they do, it's nothing you couldn't google yourself, so they don't have differing opinions; they might have a hard time parsing context-sensitive sentences where the context is far removed from the subject... the list goes on. It's possible for someone to form a bond with a machine made to look human, but it should be easy for an outside observer to tell it's a bot.

Also, in about 20-30 years, if trends continue and if we specialize hardware, we're going to bridge that 100000 fold processing 'speed' and storage gap.
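That hundred-thousand-fold gap maps neatly onto doubling times. Here's a quick sanity check, assuming (my assumption, not anything stated in the comment) a Moore's-law-style doubling of effective processing power and storage every 18 to 24 months:

```python
# Sanity check of the "20-30 years" estimate. The doubling intervals below
# are an assumption for illustration, not a figure from the comment.
import math

gap = 100_000                      # the claimed 100000-fold shortfall
doublings = math.log2(gap)         # ~16.6 doublings needed to close it

for months_per_doubling in (18, 24):
    years = doublings * months_per_doubling / 12
    print(f"doubling every {months_per_doubling} months -> ~{years:.0f} years")
# ~25 years at 18-month doubling, ~33 years at 24-month doubling
```

That lands roughly in the quoted 20-30 year window, running a bit over at the slower doubling rate.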

8199116

Yup, even if it really is biologically inspired, that might mean a couple petabytes of information, and it doesn't need to be copied perfectly. At 43 terabits per second, one of the fastest transfer speeds we're capable of, it would take around 3 minutes per petabyte to copy. However, that is assuming copying is one of the added features; if it's not, they should be able to remove the brain from the actuators. They did that at the start anyway.
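For what it's worth, the arithmetic works out to roughly three minutes per petabyte. A back-of-envelope sketch using the comment's figures (the zero-overhead, full-rate link is my simplification):

```python
# Back-of-envelope transfer-time check using the figures from the comment
# above (43 Tbit/s link, petabyte-scale payload). Assumes the link runs at
# full rate with zero protocol overhead, which is my simplification.
PETABYTE_BITS = 10**15 * 8        # 1 PB expressed in bits
LINK_BPS = 43 * 10**12            # 43 terabits per second

def transfer_minutes(petabytes: float) -> float:
    """Minutes needed to move `petabytes` of data over the link."""
    return petabytes * PETABYTE_BITS / LINK_BPS / 60

print(f"1 PB: {transfer_minutes(1):.1f} min")   # ~3.1 min
print(f"2 PB: {transfer_minutes(2):.1f} min")   # ~6.2 min
```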

8199060

On the Turing test, you're also right. It's a test to determine whether a robot can fool a human into thinking it's conscious, not whether a robot is conscious.

We haven't really been able to measure or describe consciousness up until now. Recently, the compressed size of a PET scan image or video has been used to help determine whether someone is aware. If they're unaware, the video is highly compressible, with any electrostimulation moving across like a ripple in a calm pond. If they're aware, the video is less compressible, and electrostimulation tends to disappear in some regions and pop up in others. Correlation does not equal causation, I know, but I think it's a very interesting first step.
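The compressibility intuition is easy to demonstrate with a toy example. This sketch uses ordinary zlib on synthetic byte strings; the "ripple" and "noise" signals are invented stand-ins, not real imaging data:

```python
# Toy illustration of the compressibility idea: a perfectly periodic
# "ripple" stands in for calm, wave-like activity, and seeded pseudo-random
# bytes stand in for irregular activity. This is NOT real PET data.
import math
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (lower = more regular)."""
    return len(zlib.compress(data)) / len(data)

n = 10_000
# "Unaware" stand-in: a ripple repeating exactly every 100 samples.
ripple = bytes(int(127 + 127 * math.sin(2 * math.pi * i / 100)) for i in range(n))
# "Aware" stand-in: irregular, seeded pseudo-random activity.
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(n))

# The regular signal compresses far better than the irregular one.
print(compression_ratio(ripple) < compression_ratio(noisy))  # True
```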

8199120

I'm saying sufficiently advanced technology could be considered living, especially if it was based off of biology.

I don't agree with that fundamental idea. The differences between a computer executing lines of code and human thought processes are fundamental. At best, one can simulate the other. But a person does not become a machine and a machine does not become a person, no matter how convincing one is at mimicking the other.

8199240

The differences between a computer executing lines of code and human thought processes are fundamental.

At the basic level, sure. A human can do exactly what a computer does, and a computer can, er, do exactly what a small cluster of brain cells does. They do it inefficiently, but whether or not a perfect simulation effectively becomes the thing it's simulating is an unsolvable philosophical problem. In those cases, when it comes to ethics, you have to look at the costs of assuming the right and wrong position on both sides.

If you assume machines aren't sapient even though they are, you risk massive slavery and the cruelty that comes with it. If you assume machines aren't sapient, and they're not, you still have the appearance of slavery, and you teach kids to treat things that act like humans as if they were slaves. If you assume machines are sapient, and they are, you effectively have a new race of humans and other animals, greatly increasing diversity at the risk of outcompeting other, natural humans and animals. If you assume machines are sapient and they aren't, you have a bunch of dolls that walk around acting like humans, and you replace real, conscious animals and humans with unfeeling chunks of metal.

I'd argue assuming machines are sapient is the better of those options, but that's because I like diversity and care about humans, though not really nature. Other people will have different opinions. I think even if machines are assumed human, there should be some limits on procreation for non-biological systems, which should be lifted when synthetic biology comes into play. However, I doubt any politician will be scientifically minded enough to get that right.



In any case, it's not really going to be lines of code. It looks like it's either going to be synthetic biology, or, more likely, 3D monolithic ICs. The only lines of code will be determining the properties of 'neuron' groups, and which 'neuron' groups connect to which other 'neuron' groups. (Current methods of IC printing may suffice to create a very large neural chip, since one failure in one neuron or synapse means only that one neuron or synapse is defective, rather than a large portion of the chip. There's already a trend toward 3D ICs, so this technology is looking very promising.)

8198391 Was thinking just the same....

They were clearly looking to manufacture an excuse to fire her. Either that or they're blindingly incompetent.
She's clearly at the top of her field, and an extremely dedicated worker. She's salaried but never leaves the office, meaning they're getting a massive return for her salary expense.
One prototype being set aside as 'lab equipment' is a trivial price to pay for keeping their rockstar happy, doubly so when they're going to be destroying dozens while testing, triply so when it means they could likely get her to buy it from them while still keeping it under NDA.

So either they knew that and simply wanted a way to get rid of Twilight, or they made their decision without bothering to consider anything.

Rarity should attempt to claim asylum. Or she could lie and say she was kidnapped and her brain was implanted in a robot body, if need be - LunaTech would go to great lengths to block an investigation, so by the time an investigation happened, ponies would believe evidence was destroyed, rather than nonexistent.

8199060 Straight lines are the result of the properties of Euclidean space. As a math teacher, I imagine you know about curved spaces (like the surface of the earth), where straight either doesn't "look" straight at all, or looks straight and makes little sense in its space.
Alive and dead aren't so easy either, even for us. Most people would think once something is dead, that's it, but research has been done on repairing and restarting recently dead cells; if I recall correctly, successfully too.
Another question is whether it's individual cells that are alive, or a collective of them. Is a headless chicken running around alive or dead? Is it both? Going the other way, are mitochondria alive? Humans rely on macroscopic heuristics to decide whether something is alive or not, and those are quite easy to fool, hence why people worshipped spirits of things like rivers, wind, etc.
Now suppose you're a primitive hominid in the African savanna. You are resting, when you see what looks like rustling of a predator in the tall grass, but you're unsure as it's quite windy today. Do you stay or leave?
If you stay, if it was just wind, then you're fine, but if it was a predator then you're probably dead. On the other hand, if you assume it's the predator and move, you'll probably be fine either way, which is exactly the mechanism evolution utilizes to promote caution. I'd rather err on the side of caution here.
Edit: I'm on a tablet and the little bugger posted before the post was ready :pinkiesmile:
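The stay-or-leave asymmetry described above is just an expected-cost calculation. Here's a toy sketch; all probabilities and costs are made up for illustration, none come from the comment:

```python
# Toy expected-cost model of the savanna example. All probabilities and
# costs here are invented for illustration.
p_predator = 0.1                  # chance the rustling is really a predator

# Cost of each (action, reality) outcome: higher is worse.
cost = {
    ("stay", "wind"): 0.0,        # rested and safe
    ("stay", "predator"): 100.0,  # probably dead
    ("leave", "wind"): 1.0,       # wasted a little energy
    ("leave", "predator"): 1.0,   # escaped, only lost some energy
}

def expected_cost(action: str) -> float:
    """Average cost of an action over both possible realities."""
    return ((1 - p_predator) * cost[(action, "wind")]
            + p_predator * cost[(action, "predator")])

# Even at only 10% predator odds, fleeing is far cheaper on average,
# which is the cautious bias the comment describes evolution selecting for.
print(expected_cost("stay"), expected_cost("leave"))  # 10.0 1.0
```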

8198391

That is exactly the thought I had as I was reading that part of the story, and I loved this take on it.

Sorry, but once a machine becomes sapient (and thus more and more intelligent as time goes on), you must get a hammer and break it. Otherwise, you risk losing all life on the planet, and possibly the entire universe, to being devoured by nanomachines.

8200587 Oh the problem is deeper than that. You defined straight lines by Euclidean space. Euclidean space is defined by straight lines. The definition you gave is a meaningless tautology, useless for actually describing what a "straight line" is.
With alive and dead, the distinction between the two is clear, even if the process of dying is not. As we explore that process, we refine our understanding of when "alive" becomes "dead." Fortunately for us, that refinement typically comes in the form of "we thought this was dead, but it turns out it's still dying" as opposed to "nope, it's already beyond all hope."

8199240 I didn't think I would see a discussion coming down to philosophical zombies in the comments this morning.

I am surprised. Are you saying that it is not possible for a machine to eventually do the same things as human thought processes, especially if it was simulated?

What is the difference if it is truly indistinguishable in action from a human if not looks? Just what it's made of?

I am not saying it would be human biologically and I am definitely not saying that any intelligence we managed to develop would just develop as a human. I am pretty sure unless it was designed to act more human, or a simulation of the biology of a human (the simulation example is my favourite) it would not be human but alien. No matter how smart it is. Not sure if that would be good or bad. But I am pretty sure we are still a ways off till that becomes important.

I am pretty interested in your ideas on this. If you don't want to post it here to get in any more of a back and forth then I would love a PM and no I am not trying to convince you even if I end up disagreeing.

Even John Searle's argument that a complete simulation is not adequate, because it can be denied agency to act in the real world, reads less like a real argument than anything else.

I think the point of my rambling example at the end is that I rarely hear an argument that doesn't amount to either "eww" or "only humans, because reasons."

Not why it is impossible. I agree it is extremely unlikely.

Your posts make me think you believe intelligence to be inherently noncomputable. Why? Maybe I won't understand but I am having trouble finding material that explains why that would be. Probably because we don't know exactly what makes us well us. Maybe. But there might be a principle I am missing. If you could give a starting point I could always slog through it.

On the more philosophical side, thought experiments are fun. For example: if we found other intelligent life, would we accept that it is intelligent? I imagine that it would be different and not human in its intelligence. Maybe human-like. But not human.

I personally think that at some point in the future it may be possible to completely simulate the human brain and raise it to be aware. But only because I can't think of a reason why not, except for resources, which I think will eventually not be a problem. Also, I think that would be a cruel thing to do, but that is beside the point.

The only reason I still put stock in the simulation idea is that smaller-scale tests of neural simulation seem to act like the real thing. Or at least the parts we have the resources to simulate do. Without explicitly coding in the expected neural responses. Which is weird but cool at the same time.

TL;DR

Partially agree but not entirely. Just interested in another pov. Maybe I missed something.

It has already been said, but really, the company's behavior is completely unbelievable. If Twilight is half the developer she seems to be, they'd simply let her keep the Rarity robot and direct her to other projects. Good R&D personnel are worth their weight in gold.
Hell, especially since it's so cheap to make one. Assembling a complete, functional, seemingly commercial body simply for testing purposes on what seems to be a prototype shows that it's comparatively cheap to make one. Giving them to your top workers, considering they most likely went through them like cheap tissue in testing phases, is trivial.

Basically, for this premise to be even slightly plausible, you needed to insert another perspective with a good reason for the board to take such offensive action against a valuable asset. Or someone with clout manipulating Moondancer and such. Or her having a reason to do it herself. As it is, it's honestly a weak story. One with potential, but currently weak :applejackunsure:

8200931 I think this all boils down to: would you boink a talking horse?

*Mr. Ed winks at you*

:trollestia:

Anyway, this story is kinda like a soft-core porn version of the Will Smith "I, Robot" mixed with "AI"... not the best of combinations to put it mildly.

8200863 Nanomachines are rather different than AI.

Unthinking nanobots programmed only to replicate are far more dangerous than a truly self-aware computer.

Mindless micromachines would be akin to a plague of bacteria which evolved resistance to all antibiotics... only far more destructive. Perhaps the best analogy would be The Andromeda Strain: a crystalline microbe of alien life which simply makes use of whatever material it comes across to replicate. No consciousness at all. It simply divides as fast as it can.

An intelligent machine can question itself. There is the possibility it can be benevolent or at least benign. There is no arguing with a germ, however.

8198135 8196453 8198391 8199574 8200814 Watch Short Circuit 1 & 2 or Batteries Not Included; those seem more like the inspiration for this. More Short Circuit 1 & 2 in plot, but the personality of Twilight reminds me of the guy who watches commercials in Batteries Not Included, just more lab-techish.

8201051

Well, Hollywood and sci-fi writers tend to take some creative liberties when it comes to this.

Nanomachines would have to be specifically designed to eat every single type of nutrient they could get, and until they reached a certain mass, eating some materials would be impossible.

Surviving solely off of plastic would give them energy, carbon, hydrogen, and oxygen, and rarely other elements, so they wouldn't be able to use full DNA, because they'd need phosphate; they would need to be designed to only use those atoms, or to store phosphate atoms. If they then moved from eating plastic to eating grass or wood, they would need to compete with passive immune systems, and if they moved to living animals, they would need to compete with active immune systems.

Sure, we could put the DNA encoding for every single known enzyme into a cell and see how it does, but then we would have to figure out a way to not spend energy on enzymes that didn't need to be used at the time, and it'd still only eat as fast as other bacteria would. It would also need to be able to handle radiation, and Deinococcus radiodurans shows that would take multiple sets of DNA. Oh, and we would need to cure every possible cancer it could get so it wouldn't just destroy itself after a while.

I'm not saying it would be impossible, but I think it would be much harder to develop than curing cancer.

Also, alien life probably wouldn't be able to eat anything of ours other than simple elements. Here's an example of why: in 1822, Charles Babbage designed something called a difference engine. It was the first automatic calculator, and it was made of gears rather than transistors. While he was funded, he wasn't funded enough to actually create the calculator, and he didn't sell any inventions from his research. Still, he developed a second, more economically efficient version, and Per Georg Scheutz used this to create several difference engines in 1855. The machines were used, but they got stuck in the niche market of printing logarithmic tables.

Despite the fact that we can create gears the size of molecules now, we still went with electromagnetic systems, due to the differing politics and economy of the time.

Now, interestingly, we're also discovering other forms of DNA that we like to call XNA. The only reason to think alien life would have DNA is because it's ubiquitous to our planet, but so is our electromagnetic technology.
