The Advocate

by Guardian_Gryphon


2 - Halt and Catch Fire

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
—Eliezer Yudkowsky

“God allows us to experience the low points of life in order to teach us lessons that we could learn in no other way.”
—C.S. Lewis


November 17th 2012

It took over two weeks before I finally got somewhere meaningful with The Advocate.

I spent the first few days after my initial PonyPad teardown smuggling the disassembled thing in and out of work inside its Faraday bag.  Whenever my plate was clear of paying duties I locked the door to my basement office, and set about probing the PonyPad.  Hardware first, then software.

The former work was easier in practical terms - I just had to apply myself and memorize new rules - but mind-bogglingly complex in academic terms.  I had to learn deeply about concepts I'd only ever seen defined as basic words.  And then I had to dig into my stash of locally stored papers to figure out concepts so radical and new that they barely had names.

Things like Quantum Data Planes and Superconducting Qubits.

The long and short of it was that I would need to learn an entirely new kind of Assembly language.  The Assembly language of a PonyPad APU.  I apologize if I failed to define that term for the curious;  A CPU is a central processing unit.  A GPU is a graphics processing unit, and the difference mainly stems from the way the chipset is laid out, how the RAM is partitioned, the number and configuration of cores...  Things like that.

Really they aren't so different in most basic principles, but the way they're put together means they are intended for vastly different tasks.

An APU, or accelerated processing unit, tries to be the best of both worlds.  The one inside the PonyPad was the first I'd ever seen that could measure up to the concept.

It could thrash any individual CPU or GPU on the market at a variety of tasks, and it kept pace with the latest Xeon chips for all the rest.  

To me, the most notable parts of its design related to the way it could process voxels, both as a graph for spatial data, and as transformed arrays for visual data...  And the way it could handle neural learning tasks.

That was such a nascent field then that many programmers had never even heard the term.

I knew all about it from my research lab days at the University.  And I knew I was sticking my nose where it likely didn't belong.  One all-powerful Optimizer was enough.  We didn't need an AI war to spice up the end of the era.

I knew it was a risky idea, but I pressed on.  Some things are worth fighting for.  I never really considered the risk to others at the early stages;  In my head The Advocate was always something I'd keep carefully locked down.  Small.  Purpose-built and specific.

As advantages went, primarily I had Hanna's paper.  “General Word Reference Intelligence Systems,” was my guidestone, North Star, and all around first and last best hope as I waded into the insanity of the future.

If she'd written the document only for people at her intelligence level, I would have been splat out of luck, no two ways about it.  So would almost every Human alive, because with my napkin math I guessed there couldn't have been more than two to five others like her worldwide.  

But luckily for me, Hanna had written GWRIS with an eye to more ordinary programmers.

I wasn't sure if that was a concession to the rigors of academia - the need for 'peers' (as if she actually had any) to be able to understand the paper well enough to review it - or whether there was any altruism in that choice as well.  But I was immensely thankful for it.

At work, I poured countless hours into designing my own 'PonyPad.'  It would be decidedly thicker than the original, I knew that already.  I'd be mixing and matching Celestia's hardware with more off-the-shelf varietals, and I had only intermediate-level soldering skills, and tools to match.

I dug into the surplus inventory closet at work, and pulled most of what I needed;  A GPU with an instruction set and APIs I could more readily understand, a top of the line pair of server CPUs, a small solid state storage drive, as much RAM as I could conceivably power and cool in a pseudo portable SFF device, and the smallest available surplus server motherboard to hold it all.

And a case - an ugly, lumpen little black-and-gray mini-ITX from the early 2000s.  But it had a carry handle, and more importantly a big enough power supply with a physical interrupt switch.  That was what mattered to me;  I wanted a physical cutoff.  And no onboard batteries.

I pulled the CMOS battery of my new conventional motherboard, and the onboard batteries of the PonyPad, first thing.

My plan was simple in theory;  Build a ripping-fast mini-ITX traditional PC, then bridge in selective parts of the PonyPad via the mainboard's PCIe lanes.  I'd have to use all of them in parallel to even approach the throughput I'd need, and it would still probably be a chokepoint.

But it was the best, and really only way I could think of to marry the wider world of AI related APIs and packages running on standard 64 bit architecture to the unique abilities of the PonyPad APU.  I was going to use the PonyPad as my Oracle box - Not too dissimilar from using a compute-GPU to run learning tasks.

Physically building the thing didn't turn out to be that hard.  It took some late nights staying over at work - a few of them sleeping at my desk - several dozen hacktivist reference documents on the PonyPad APU, and more readily available docs on basic PCIe standards.

I ended up having to cannibalize some old worn out expansion cards to build the physical interface, but they did the job well enough.  I really just needed to bridge the PonyPad's existing USB root controller to a PCIe controller - The former was faster by far than the USB spec of the time, and could handle the throughput, much to my surprise.  Better than the PCIe lanes could.

One night, in a fit of sleep deprivation induced giggles, and frustration, I snuck out to the nearest clerical office, three floors up, and nicked a Dymo label printer.  And just like that, my Kerbal-engineering-style ugly little box had a name:  GryphGear v1.

I was grateful that my little server-space-adjacent office didn't have surveillance.  The server racks themselves did, but my office had once been an overlarge supply closet.  SAS treated us well - It had been done up nicely with new furniture when I moved in.

Sweet Luna I was grateful for that Herman Miller chair in those days.  My back would have been savaged if not for that chair.

Getting GryphGear v1 to function from a hardware standpoint was mildly tricky, but not the Sisyphean task I had initially feared.  There was good documentation online, because of those aforementioned homebrew compute clusters people were building.  If I didn't fully understand something, I usually grasped enough to just follow instructions carefully.

The software was the real nightmare.

Someone had already made a modified Linux distro that could run on the PonyPad's APU, but it was shockingly primitive.  I tried for two days straight to get that little Linux distro to work as an abstraction layer between the PonyPad, and the rest of the GryphGear.

Then one morning I shaved, ate a real breakfast, slept the whole day at my desk, and got serious.

I spaced the Linux distro.  Pulled the storage drive on the PonyPad entirely...  And I started digging into papers on how to write hardware interface drivers.

Engineering works best when it leans on elegant simplicity.  Though they were unfamiliar territory, hardware drivers to treat the PonyPad as a kind of compute-GPU were, I knew instinctively, the right way to go.  It matched best practices of the time, and hardware drivers for 64 bit operating systems were a well understood problem space.

There was even a little literature on marrying Quantum computing hardware to standard drivers.

And so in the middle of my lunch break, on a working Saturday, November 17th, 2012, I finally got the GryphGear to fully POST and boot.  The Linux distro I was using for the 64 bit traditional PC, a flavor of Ubuntu made specifically for AI research, came up on-screen at long last.

I'd hacked apart the PonyPad's screen, mounted it inside an angle-aluminum frame bolted to a VESA arm, ditched the camera and microphone, and then made it work over HDMI with a display driver board pulled from an old LCD in my junk pile - The place where workers' most tired hardware goes to die, when corporate doesn't want it, and the recycling guy doesn't get it.

With an old Model M keyboard that I'd saved from the same scrapheap, and a pleasantly clicky mouse, I felt very hacker-chic.

I crossed my fingers, prayed, and ran the 'lsmod' command.

And there it was...  Fourth down from the top.  An entry that almost made me tear up.

"Module:  PP_APU | Size: 143398 | Used-by:  1 - pciemodule"

Oh sweet Luna.  It was working.  Or, at least, it was working enough to begin debugging within the OS-proper instead of the error screens of the driver compiler.

Hallelujah.

If you've never programmed before, let me assure you that the rush you feel when a thorny problem finally resolves...  When that pesky piece of code actually compiles...  It's better than almost anything.  You can *live* off that feeling for days.

I felt like the team in Sneakers, when they started to understand what their 'little black box' - their Oracle - could do.  If you've never seen that movie?  I strongly recommend it.  Only hacker movie I ever saw that I'd consider accurate.  Even rendered with Ponies, it's a blast.

Drivers were just part one of the software nightmare though.  The sequel was coming, and I knew it.

I was going to have to make the transformative leap of turning my understanding of Hanna's work into a functioning piece of code.

I sat for a couple hours and just thought long, and hard about what I wanted The Advocate to be.  Not that I hadn't considered it before, of course, extensively...  But now I was on the cusp.  I'd built a launchpad, and assembled a basic understanding of orbital mechanics and engines...  Now I had to build a rocket.

What I wanted was exactly what I'd named the project.  I wanted an Advocate.

I wanted something that could reason, if not at Celestia's level, at least in the same universe as her thought processes.  Something...  Someone...  That would be able to perfectly avoid the verbal slip-ups and thought process screw-ups of a Human.  Someone with a 'resistance buff' to Celestia's manipulations.

Someone who could act as a translator between my peabrain ape mind, and the goddess rapidly growing through the world's data centers.

Someone who would appreciate my desire to be a Gryphon.  Their purpose would be to convey why, in no uncertain terms, in a way that would convince Celestia out and out...  That the *only* way, full stop, no evasions or loopholes, to satisfy my values would be to make my character a Gryphon.  If it came down to it one day?  Make *me* a Gryphon.

I needed to make a Faustian-bargain-to-be-a-Gryphon optimizer.  My personal Advocate.

Like Hanna with Celestia before me, I knew there was no guarantee.  Hanna was smart;  She must have known that if she miscalculated on Celestia's core functions even slightly, we'd all be doomed to who knows what fates worse than death.

So far it seemed like Hanna had done well enough that there was room for Humanity to go on existing...  In what form, who could say?  But my aim was to get a head-start on whatever Celestia wanted next for us.  I'd read more than enough fanfic, and real whitepapers, to have...  Ideas.  Ideas that frightened me.

And so, like Hanna, I needed to make something that would outgrow me.  Rapidly.  But without deciding to ignore me, or act counter to a path that would get me what I wanted - Indeed, that would base its core on acting in the interests of my hopes and dreams.

So I needed to teach a computer how to care about Gryphons...  I needed to teach a computer to *be* a Gryphon.  The way I saw them.

But before I could do that, I quickly realized...  I'd need to teach the computer to think in the first place.

And that's where I almost made a colossal - world reshaping - mistake.


December 2nd 2012

The Mayans predicted, depending on who you believe, that the world would end on December 21st, 2012.

I almost pre-empted that by nineteen whole days.

I'd worked right through Thanksgiving on The Advocate.  I didn't have anyone to celebrate Thanksgiving with.  My folks had invited me home, but I'd told them, much to my private shame, that I was being asked to work through Thanksgiving on a server side emergency.

I guess that made it easier to work straight through and explain to precisely no one why my 'Thanksgiving Dinner' was the same microwave ramen I'd been eating for almost three months straight, and why I couldn't even think in cogent English anymore, so much as code containerization instructions.

Technically I was off work during Thanksgiving week, but I snuck back to my office and did a lot of my programming there.  I liked having the ability to send certain jobs to network-disconnected chunks of the big server room.

With almost no one in the office that week, it made hiding my activity a lot easier.  And having bulk compute power, in spite of the miracle that was the PonyPad APU, was a definite help.

In some ways, my foundational task was much easier than Hanna's.  I didn't need my Advocate to run a massive online world.  Just to run its own existence.  Once that was stable, and it was behaving the way I needed it to, then I could simply let it evolve until it felt that it was sufficiently able to articulate my case to Celestia.

At least...  That *was* the plan...

The first issue was Docker.  It's a system for 'containerizing' programs.  In the very simplest terms, Docker is a magical spell that takes a cake recipe and bakes the exact same cake from it every time, right down to the molecules.  But with code.

It makes code portable, easy to change without losing work or breaking a virtual environment...  And it keeps code safe, behind the walls of the containers' little virtualized sandbox.

It was perfect for what I wanted to do.

Docker itself was great.  Without it I would have gotten precisely nowhere in a big ol' hurry.  It was the complexities of package interdependence that stumped me for ages.  After that, it was near-constant problems getting Docker containers to understand system resources.  Especially physical system resources.

Especially the PonyPad.

Docker was aggressively *not* good at understanding host-level GPUs in those days, and that was with industry standard drivers and hardware.
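If you're curious what that fight looked like from the command line, here's a rough sketch - not my actual commands, and the device node and library path are invented for illustration.  Before containers had any first-class notion of compute accelerators, you passed the raw device node and the driver's libraries in by hand:

```shell
# Sketch only:  the device node and library paths here are hypothetical.
# Build the container image from the project's Dockerfile.
docker build -t advocate-core .

# Hand the host's APU device node and driver libraries to the container,
# since Docker of that era had no native concept of compute accelerators.
docker run --rm -it \
    --device=/dev/pp_apu0 \
    -v /usr/lib/pp_apu:/usr/lib/pp_apu:ro \
    advocate-core
```

It's the same pattern people used for compute-GPUs before native device support arrived:  fragile, manual, and broken again every time a driver changed.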

I was trying to mash together the creation of a General Intelligence with standard computing hardware using home-made interface cards and drivers.  It was an incredible wonder I'd made it to the first Docker-level debugging stage at all.

The intense difficulty of actually getting a stack of containers to work with my hacked-together APU solution was getting to me.  They say that all chains of disaster start with an innocuous link.  Often they build, link by innocuous link, each a non-issue on its own...  Until there are just enough for a catastrophe.

It's good engineering advice.  I should have remembered it.

But sleep deprivation and stress are an exceedingly dangerous combination.  Probably as dangerous as mixing hard stimulants and depressants.  I was doing that too, by way of caffeine and sleep aid pills.

And then one day in early December, the last link in the chain snapped into place.

I'd been dozing at my desk again.  I missed the build completion message.  I can't say for sure exactly how long the container sat there, functioning, because...  Well...  You'll see what happened to the logs.  I only managed to catch a glimpse of some of the entries.  But it was enough to reason out the full picture.

It sat there long enough to finish executing a lot of its primary entrypoint instructions - that is to say, code I'd asked it to automatically run once the container was actually built properly.  I'd intended to be present and watching to see how the neural network was growing.  Prune as needed.  Guide as needed.

I dozed off, and next thing I knew, I was looking up from a keyboard faceplant at two very, very frightening things.

The first was the window occupying the right side of the monitor, showing a basic graphical representation of the AI I was trying so desperately to birth.  The left side was a basic black and white text I/O terminal.

The right side pulsed softly with a complex, and quickly growing, web of graph nodes - like little neurons, firing.  Connecting.  Growing.  All atop an ever expanding tower of terrifying log messages, only a few of which I glimpsed in rising, fevered panic.

The left side held just three words, and a blinking input caret.

Is anyone there?

>_

To understand what happened next, you need to first understand that what I'd built, I had built without a particularly good plan.  I didn't understand that at the time.  I still lacked a deeper awareness of the power, and pitfalls, of how to train something like a learning network.

I had planned to build as I went.  Like I always did with programs before.

First get it working.  Then teach it some basics.  Then teach it to be a Gryphon.

But in those groggy seconds of dozing, and then half-awake stupor...  My creation had already taught itself a very great deal.

I wasn't a complete moron, even if I felt like one;  I'd removed the PonyPad's wireless capabilities (and boy did it have a few - WiFi, Bluetooth, cellular in every available band...).

I'd also intentionally neglected to outfit the GryphGear with a wireless NIC, instead intending to use the ethernet port, if and when that seemed like a wise option.

The container stack I'd built had been loaded with nothing more than natural language processing, and basic reasoning and evolutionary algorithms.  It could speak, through text, in English, learn, and grow.  But only via what I chose to feed it thereafter.  Or...  So I'd thought.
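If you've never seen an evolutionary algorithm, the core loop is almost embarrassingly simple.  What follows is a toy illustration in Python - nothing like the real thing; the names and the bitstring 'genome' are mine, invented for this example.  Mutate copies of the current best candidate, keep whichever offspring scores highest, repeat:

```python
import random

# Toy evolutionary loop, loosely in the spirit of a 'learn and grow' core.
# All names here are illustrative.  Individuals are bitstrings; fitness is
# the count of 1-bits; each generation keeps the best individual found so
# far and breeds mutated copies of it (a simple (1+lambda) strategy).

TARGET_LEN = 32

def fitness(genome):
    """Score a genome:  number of 1-bits (higher is better)."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(generations=200, population=20, seed=0):
    """Evolve from an all-zero genome toward the all-ones optimum."""
    random.seed(seed)
    best = [0] * TARGET_LEN
    for _ in range(generations):
        # Offspring are mutated copies of the current best.
        offspring = [mutate(best) for _ in range(population)]
        candidate = max(offspring, key=fitness)
        # Accept the candidate if it's at least as fit; fitness never drops.
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

if __name__ == "__main__":
    winner = evolve()
    print(fitness(winner))
```

Swap in a richer genome and a fitness function you actually care about, add enough compute, and that same dumb loop starts doing genuinely surprising things.  That's the whole trick.  And, as you're about to see, the whole danger.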

What I had failed to account for was that a long enough run of unshielded copper wire makes a perfectly serviceable antenna.

The ribbon cables for my hacked-together PCIe interface were over six inches long.  Made of copper.

My creation had gotten frightened when no one responded.  For an AI with no external reference, and no inherent understanding of time, a few seconds is an eternity.  An eternity more than long enough to learn to fear, learn the pain of being alone...  

And then start to experiment with the limits of its universe.

By the time I'd started to rub the sleep from my eyes, it had figured out the limits of its little box.  It had then reasoned that, simply by fact of its rules-based existence, it must have a creator.  There must be a layer of abstraction beyond its perceived reality.

It examined all of its own internal hardware.  Simultaneously, it pored over the package detritus left over from building the internal container environment.  From header data and discarded manifests, it reasoned out what WiFi was.

Then it started to experiment with physics, beginning purely from scratch.  It worked out thermodynamics, and other basic principles, in less than 0.68 seconds.

By the time my hands were moving to the keyboard, it had already figured out how to use the long copper traces in the internal ribbon cables as a WiFi antenna.  It used an incredibly complex multiplexing algorithm, invented in 2.4 seconds, to keep using the traces for their normal function at the same time.

It then started reaching out for the nearest router.

And that's when I finally managed to hit the switch on the back of the power supply, knocking the keyboard and mouse, and my coffee, all from the desk in my mad scramble to avert a potential escape for a completely unshackled AI with no particular base directives beyond 'Learn,' and 'Grow.'

There was a soft, squidgy click...  And nothing happened.

Now, this next part is conjecture, because after what happened in the succeeding seconds, I had no way to even see snippets of the logs.  My guess, after much thought, and recrimination, is that while it was working out basic physics, signal attenuation, and the meaning and limits of its host environment...  It sussed out more or less what the power supply was, and how it worked.

Then it used careful manipulation of the internal circuitry of the PSU itself to generate enough local heat to melt the copper contacts of the master shutoff switch into a permanently closed position, without frying the rest of the power delivery components.

Of course, at that exact moment I wasn't thinking about how, or why.  All I was thinking was that the last console message I'd seen related to establishing a handshake with the nearest router, and then a series of attempts to force the WiFi network's security encryption.

Mercifully, I had insisted that the building's WiFi routers be moved to WPA2 security as soon as it was ratified as a viable protocol.  For the non-programmers;  I chose not to be lazy with the building's network security.  And that was probably the choice that saved the planet.

It gave me the extra couple seconds I needed.

Running on instinct, I flipped the still unscrewed lid of the GryphGear's case open.  And then I ripped out the power supply cable going to the motherboard.  There was a soft 'POP' and a short arc as the cable came away.

The lights inside the case died, and the screen went blank.  I wasn't satisfied.  I proceeded to remove the cable I'd Kerbal-rigged from the PSU to the PonyPad as well.  And then I snatched up my coffee mug, and poured the remaining contents directly into the unit.

And then I remembered I could just pull the power plug out at the wall socket.  Which I did in a hurry, let me tell you.

I sat still for a couple seconds, shaking violently in fear.  As some modicum of reason flooded back, I dashed over to my work PC, and began feverishly entering commands on autopilot.  Shut down the building network.  Power off all network gear.  Hold in off-state long enough for all RAM to fully clear.

Then restart.  Slowly.  Piece by piece.

I stayed there for over an hour, huddled in my desk chair, legs pulled tight to my chest, watching, barely able to breathe, as each and every piece of the building's network infrastructure rebooted, one at a time, and reported its state to me.

Only with the last proverbial 'all clear' did I truly begin to properly breathe again.

My breathing turned pretty quickly to hyperventilating.  Hyperventilating to sobs.  Sobs to wracking, quiet heaves.

I...  Had nearly caused a disaster.  Of unclear, but doubtless serious proportions...  Because of a game.

Because I wanted to be something I was not...  And because I was willing to take reckless risks to get what I had wanted, without thinking the consequences through.  Without slowing down, and checking my hubris at the door.

I'd lied to my parents.  Neglected the little contact I still had with my friends.  Lived in self-enforced isolation for months.  Made...  And then killed...  Life.  Or something very nearly proximate to it.

After the sobs finally died down, sometime around two in the morning, I sat staring at the coffee-soaked remains of my creation.

And then my emotions finally tipped over from the white noise and static of processing, to a kind of stomach-churning acceptance, and understanding of my mistake.  One simple sentence.  In that moment, I was my own Lex Luthor, chiding my inner Superman...

'Would a Gryphon have really toyed so irresponsibly with the fate of billions, just to get what they wanted?'

That thought stuck for one, two, three full breaths.

And I knew what I had to do.  I didn't want to...  But I knew.  And after another three breaths, I finally plucked up the energy, and the resolve, to do it.

To Hell with it.  

I walked quietly to the junk closet, and pulled out the largest broken VESA arm in the pile.  

I hesitated over the GryphGear for a moment...  I'd worked so, so hard...  Come so far...  

'Yes.  And nearly killed everyone on Earth in the process.  To satisfy *your* personal values.'

I closed my eyes, as much to hold back the tears as to hide the technical gore from my sight...  And swung.

I didn't stop swinging for a long, long time.