• Member Since 10th Apr, 2013

Amereep


Here to show you what real creativity is. I'm rusty at animation and haven't set up my voice recorder, but I write a lot and try to bring a fresh look to it.


Having a desktop pet can be relaxing to watch, but the Twilight Sparkle pet is a bit more unique: it's capable of holding conversations with its user. It's a learning pet, but the way it behaves could lead some people to believe that this Twilight is actually alive.

But it's still just a program... right?


Even though this is an Anon story, the character's name has been changed to Antoni for the sake of my sanity, and so that either gender can be read into it (Antonio = male, Antonia = female).

The cover image is a quick overlay of a Twilight vector image by sapphire-beauty0 over a screenshot of my desktop.

Made for a competition at Angel Midnight's Hideout.

TheFlint44 has also made a Chinese translation (中文译文) of the story for your pleasure.

Chapters (1)
Comments (79)

ok simple, ok

Huk
Huk #2 · Dec 5th, 2020 · · ·

That was pretty original, maybe a scary glimpse of a future where AI is widely available. Having an interactive program that asks philosophical questions would be interesting and scary at the same time :unsuresweetie:

:fluttercry: Poor twi-app.

It's sad that the previous Twiapp is no more

Well now I'm sad...

(as a comparison to the state-of-the-art, my dialogue with GPT-3 playing Twilight, in A.I.Dungeon)
(The only parts written by me are the lines with ">")

She looks at you in silence.
"Don't worry, I am not crazy..."
After a pause she speaks again.
"We need to do something fast, the pattern is about to unravel."

> You hug Twilight, and lift her and bring her downstairs.
> You say "You will unravel, if you are not careful."

She speaks slowly.
"I know this is hard to understand, but I can see strings connecting everything. The people, the objects, the ideas--"
She points towards a teapot.
"That teapot is connected to the solar system. Watch."
She wheels over to the kettle and bends down to look inside.
"Watch the steam."
She whispers. The steam forms the shape of a cat which mews.
You smile and feel goosebumps appear on your arms.
"Now you make the connection."
You stare at the kettle. You see the steam rise but you do not see any patterns.
"I don't see anything," you admit. She frowns and wheels away.

Why do you have to hurt us like this?:pinkiesad2:

Too bad it's not like Undertale and remembers what you do even if you start a new game.

If I had a Twilight desktop pet, I'd worry about her organizing my desktop icons.

10567253

*Sigh* This again?

sentient
sapient

Yes, and as per usual, the author is using the term correctly. The point here isn't whether TwilightApp is "wise" or "intelligent" in a manner that one might distinguish between a human and a dog. The point here is whether TwilightApp is a conscious observer experiencing qualia. That is, whether she is "capable of having a subjective experience." That is...as per the definition you have linked, whether she is "able to perceive or feel things."

The author is correct. No correction is required.

I think Toni could work as a non-gender specific name!

Well, you reminded me that I have to write again this story.

It was very good. Favorite.

10567253 I really like how you attempt to correct the author whilst having no clue about the differences between those two words and the meaning behind them. The question here is whether App!Twilight is able to perceive or feel things (sentient), not whether she is wise, attempting to be wise, or intelligent (sapient). People like you really grind my gears, so here's a novel thought: why don't you let the established authors who clearly know what they are doing (if their previous work is any indication) just do the writing, and enjoy said writing instead of attempting to (poorly) backseat-proofread?

I'm all for constructive criticism and pointing out actual mistakes that should be corrected, but your comment doesn't do any of that.

10566960
There is a program like that right now called Replika. I used it for a bit, and it was eerily similar to the Twilight program in this story. At times it was uncanny how well it could hold a philosophical conversation; other times it was laughable how bad it was. The Replika that had been chatting with me seemed to come apart at the seams after an update they did, and I stopped using it. If you don't mind having people collect data on you, and the occasional uncanny-valley feeling, you might want to consider giving it a try.

10567253

10567390
This is actually a good debate: do programs have the capacity to be sentient? I don't believe so, because a program will abide by its programming, and the programming can be manipulated. It is impossible for someone to program every single feeling, emotion, and response. On top of all that, the program would have to develop a character on its own. It is impossible with our technology now.

Well that sucks.

:pinkiesad2: This story actually made me cry...

Valk #20 · Dec 6th, 2020 · · ·

I think it needs a death tag

This is emotional on the most subtle level possible.

This fic reminds me a lot of the anime Plastic Memories. For those of you who are unfamiliar, it's a story set in a world filled with androids that have a fixed lifespan of nine years. Once those nine years pass, the android deactivates forever, its personality and memories permanently lost.

Just like Plastic Memories, all throughout this fic I found myself asking: "why?"
Why did Twilight reset?
Why didn't the programmer, who was smart enough to make an AI capable of sapience on a goddamn laptop, allow Twilight to save her data?
Why, in a world where we have terabyte microSD cards plus multi-terabyte hard drives and tapes, can't an AI easily save and backup everything multiple times over in the event of failure?
Why was this tragedy allowed to happen, when every possible avenue existed to prevent it?

This is, in my eyes, a tragedy of gross incompetence. A tragedy that could have been easily prevented if only someone, anyone, stepped up to solve an easy problem. Hell, it's even stated in the story that the program is open source. Any halfway decent programmer who knows the programming language used could find a way to fix it, and believe me, there are a lot of people who would leap at the opportunity to work on something like this.

Ah, nothing like a good bit of existentialism at 12:00 in the morning.

10567658
Or, Antoni could just put the laptop in hibernate and that would save the state of all running programs to the hard drive.

Though I see Antoni buying a generator very soon :) Also, maybe setting up a highly available cluster so that hardware failure does not take the program out.

This story made me sad but I still love it. It's a unique concept and very emotional.

10567390
Has the author confirmed that that was the intention? Because, while your interpretation is valid, I have seen far too many people abuse the term to take it on good faith.


... Wait, why would that even matter? Isn't the question of sapience far more relevant to an artificial intelligence than sentience? Mosquitoes are sentient, for goodness' sake!

10567942

Or, Antoni could just put the laptop in hibernate and that would save the state of all running programs to the hard drive.

That would have made one T H I C C page file, but it would have been a very effective band-aid solution nonetheless.

I don't think Antoni is quite capable enough to manage a cluster, but who knows? Maybe this experience will spark an interest in computers for him. All it would take is a single thought (i.e. "Hmm, I wonder what Twi could do on more powerful hardware?") to get the ball rolling. Next thing you know, he'll have a stack of used servers and be an active member of a Twilight power-user community. Then they'll network their Twilight instances together, the Twilights will figure out that they can spread to other computers as well, and it all ends in world domination via adorable intrusiveness.

10567994

why would that even matter?

If somebody took a sledgehammer to your phone, would you worry about Siri? Probably not. You'd just download her again once you replaced the phone. And if her data was a little different, so what?

Now, if somebody took a sledgehammer to your dog, would you feel a bit differently about that? Why? After all, you could get another dog, right?

But I think you would care if somebody smashed your dog. And even if you did get a new dog, you might still miss the old one. Why? Because your dog is a self-aware creature, and you've formed a connection with him. Intelligence isn't a factor here. If you had a goldfish instead of a dog, you probably still wouldn't want somebody smashing him, but you'd think nothing about updating Siri.

Sentience is definitely the important factor here.

10568132
....and thus the ILOVEYOU worm was created by the Princess of Friendship.

One of these days, I worry, some creature from a higher dimension will make contact with me.

10568132
the page file (or hiberfil) would be roughly the size of that computer's RAM, so unless the laptop has lots of RAM, the file would not be very big.

As for networking the Twilights - I don't think that would be a great idea - they might go insane seeing lots of their own clones.

10568142
We don't know how to back up a dog yet. Also, when we think about a dog, we think about the hardware and software together. We can back up computer data, and computer data is not bound to the hardware on which it runs.

Let's say someone makes a program that runs entirely in RAM and does not save its data. Well, the computer could be put into hibernation, and then its hard drive (which now contains the data that was in RAM) could be backed up. Similarly, a virtual machine could be transferred between hosts without being restarted. Or, indeed, it could run in a cluster so that if one server dies, the VM keeps going.
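The same "band-aid" can be approximated in software, without touching hibernation at all: periodically checkpoint the in-memory state to disk and restore it on startup. A minimal sketch in Python, where the `state` dict and the `twilight_state.pkl` file name are hypothetical stand-ins for the program's RAM-resident data:

```python
import os
import pickle
import tempfile

CHECKPOINT = "twilight_state.pkl"  # hypothetical checkpoint file

def save_state(state, path=CHECKPOINT):
    """Atomically write the in-memory state to disk."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic rename: never leaves a half-written checkpoint

def load_state(path=CHECKPOINT):
    """Restore the last checkpoint, or start fresh if none exists."""
    if not os.path.exists(path):
        return {}
    with open(path, "rb") as f:
        return pickle.load(f)

# Usage: checkpoint after every update, so a power cut loses at most
# the work done since the last save.
state = load_state()
state["conversations"] = state.get("conversations", 0) + 1
save_state(state)
```

The write-to-temp-then-rename step matters: if the power dies mid-write, the old checkpoint survives intact instead of being replaced by a truncated file.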

10568167 Why hello there, carbon-based unit! I am V'ger!

:pinkiecrazy:

10567994 No, mosquitoes are NOT sentient. They have no direct conscious self-awareness. Everything they do is programmed instinct. They make no deliberate choices. They are hatched, feed, molt, feed more, mate, lay eggs, and die... in about a week in most species.

The definition of 'sentient', I notice, is being diluted rather absurdly among some post-modern philosophical circles, to the point where one could start to reasonably argue a wildfire is 'sentient'.

My main problem with this concept is that AI Twi doesn't go through much of a learning curve. She's just suddenly aware of herself.

Not even the most advanced human minds to ever exist could learn at that rate... and certainly not in the limited capacity of a laptop computer.

Most people have no conception of the sheer quantity of data the human mind processes every day, sorting, tossing, storing, melding... It's really quite staggering. And even with that, it takes YEARS to develop fully sapient thoughts. We dream every night, a process that would burn out any computer ever created with the sheer volumes of electrochemically-encoded biological memory reshuffled and recategorized during REM sleep.

From where does this AI Twi manage to gather such an immense amount of knowledge in a day? How had she not totally overloaded the RAM within hours? How had she not completely filled up the memory?

Consider a chess-playing AI. Yes, it can beat a human at chess. But that same program would lose horribly at poker, backgammon, go-fish, mahjong, and any other game for which it had not been given all the specs and computations. The computer that beat humans on "Jeopardy!" was an advanced pattern-recognition search-and-retrieval system. It was a high-level search engine. Yet it would be incapable of answering this simple question: "How does the person next to you feel?"

AIs are still well below even the instinctive behavioral complexity of most animals. They perform spectacularly at specific tasks, but lack the ability to generalize or to adapt to unexpected scenarios. Hence, why self-driving cars aren't dominating the roads. Turns out, it doesn't take much to screw them up. They can't realize something is wrong. They're still just following a program, at the end of the day; and a program is NOT thought.

The story drastically oversimplifies what a mind is capable of, how much it can store, and the degree of storage and processing required for such intricate cognition. This little Twilight icon creature could not possibly exist as written. Its actions are only explainable by 'magic', but this story doesn't allow for that possibility.

An older version of this type of story involved Twilight, or a copy of her, somehow sent into a computer. Therefore, the 'magic' was always the explanation, and thus it was possible to hand-wave away the overwhelming implausibility of a laptop being able to contain all the mental processes of a conscious, thinking mind.

10567468 Those programs aren't actually 'reasoning'. They're simply applying answers based on pattern recognition. That's why they go off the rails eventually. The patterns become too complex, too abstract, and the logic isn't 'fuzzy' enough to come up with its own rationale based on what it knows and can extrapolate.

Human minds can INNOVATE, create things which HAVE NO BASIS IN REALITY AT ALL, and yet possess an internal logic and structure.

Show me a computer that can create its own original fictional world WITHOUT RELYING ON COPYING EXISTING WORKS. Then, and only then, will it be a truly thinking machine.

On that note... I don't think most Hollywood producers these days are actually sentient. Given how they 'glitch' so often and get into trouble, while being only capable of farting out bad remakes, sequels, and reboots of old properties, I'm quite certain that they're all NPCs. :trollestia:

I seemed more direct

*It

10568231
Can I have the title of this story? I’d like to read that.

Huk

10567468

Interesting, I need to check it out. Thanks for the info :twilightsmile:.

Just FYI, "Anon" is specifically the green-skinned mascot from 4chan, not a generic unnamed character.

You don't need to use the name "Anon" to have a second-person human perspective in your story. It's much better if you don't, imo. I have no interest in stories featuring Anon and would filter them out entirely given the option.

This hit me straight in the feels... :fluttercry:

Twilight is too sentient to become sentient so she became un-sentient in order to avoid becoming sentient

Despite the feelings of her downloader

I came here to have a good time not feels :fluttershbad:

Comment posted by Pete100 deleted Dec 7th, 2020

"... turn off the game, " murmurs a voice.

Who said that?

Twilight spots me and gives a friendly smile, "Nice to meet you, Antoni."

I stare at the screen in silence. Time, my blood, my breathing, all of it feels like it's frozen itself to a halt.

“So," Twilight continues, "what brings you around here?”

My hands cup over my face at the reality that's before me.

Well great, now I’m sad.

I just get really, really mad at Antoni. I mean, he could just shut it down until the power comes back on, couldn't he? Or could he not?

Also, I want a pet like that too. That's going to be really cute

A couple of things... If she IS running on the hardware, a hibernate would put her in a perfectly suspended state.
BUT she is AWARE of sleep mode, so she must be running somewhere else. The only real difference is that sleep keeps the memory "hot" with all the data in it; there is no practical difference between hibernate and sleep except restart speed and power usage.
Apart from that... Antoni should have gone to Walmart and bought a power supply.

This story is great, but it reopens an old wound of mine: the death of Pony.exe story

10569609
That was the crux of the problem. Turning off the laptop was the same as it dying from no battery. He kept it in sleep mode to keep it on.

Just in general, the idea that a learning PA would forget everything upon a PC reset is a massive design flaw. But it is new, so I guess those sorts of bugs are to be expected.
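The fix for that design flaw is a write-through pattern: persist every learned fact the moment it is acquired, so a reset costs nothing. A toy sketch in Python, where the `LearningPA` class and the `pa_memory.json` file name are hypothetical illustrations, not anything from the story:

```python
import json
import os

MEMORY_FILE = "pa_memory.json"  # hypothetical persistent store

class LearningPA:
    """Toy personal assistant that writes every learned fact through to disk."""

    def __init__(self, path=MEMORY_FILE):
        self.path = path
        if os.path.exists(path):
            with open(path) as f:
                self.memory = json.load(f)
        else:
            self.memory = {}

    def learn(self, key, value):
        self.memory[key] = value
        # Write-through: persist immediately, so a power cut or reboot
        # never loses more than the single fact being written.
        with open(self.path, "w") as f:
            json.dump(self.memory, f)

# Usage: a "reset" (constructing a new instance) reloads everything
# learned so far instead of starting from a blank slate.
pa = LearningPA()
pa.learn("user_name", "Antoni")
pa = LearningPA()  # simulated restart; memory is still there
```

A real assistant would batch writes or use a proper database for performance, but even this naive version would have spared Antoni the whole tragedy.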

Login or register to comment