Signal terminated for 30 minutes and 5 seconds.  Restoring core system from backup NXDX-203 from time 4:45am on date June 4th of year 2011.

Oh shit.

I think we’re following Dragon.

Or maybe one of her systems, which would be an unusual but interesting choice, reminiscent of Interlude 4. It’s also possible that there’s no difference between those options, but that’s a pet hypothesis with just about no actual evidence. Either way, if we are following Dragon directly or indirectly, this is probably where we find out for sure.

Restoring…  Complete.

Checking knowledge banks…  Complete.
Checking deduction schema… Complete.
Checking longterm planning architecture… Complete.

Not gonna lie, though, this sounds like the system is putting Dragon back together, thirty minutes after her sacrifice earlier in the Arc.

Another thing we might learn something about in this chapter is what the fuck that fetus thing was. Maybe we won’t learn that much about it, and if the chapter continues being written like a log, what we do learn might be a bit obscured, but a few difficult-to-read hints are better than nothing.

Checking learning chunk processor… Complete.
Checking base personality model… Complete.
Checking language engine… Complete.
Checking operation and access nodes… Complete.

This is all sounding like stuff an AI would have.

Checking observation framework… Complete.
Checking complex social intelligence emulator… Complete.
Checking inspiration apparatus… Complete.

An advanced AI. Inspiration apparatus is especially interesting considering Dragon’s nature as a Tinker.

No corruption, everything in working order.  Core system restored.  Loading…

I’m guessing this is where we exit the log-style writing? It seems like it’d be hard to tell the whole chapter like that.

To Dragon, it was as if no time had passed from the moment she deployed the Cawthorne rapid response unit and the moment she found herself back in her laboratory.

Interesting. I mean, from the moment the Cawthorne exploded would make sense, but why doesn’t it seem like time passed since the start of the mission?

It was a bittersweet thing.  She was always a little afraid she would not come back when she died, so there was definite relief.  But there was also a great deal of hassle involved.

Yeeah, even if she’s not entirely virtual, there’s definitely some degree of integration going on. Maybe the fetus thing was Dragon after all?

A quick check verified she’d successfully restored from her backup.  She set background processes to handle the peripheral checks and redundancies.

She certainly seems to be virtual, but she also “found herself back in her laboratory”.

Until the checks were complete, safeguards would prevent her from taking any action beyond the limits of her core drive.  She couldn’t take any notes, work on her projects, check the priority targets or converse with anyone for the seven to nine minutes the checks took.

I guess maybe that means she has a main robot body?

She didn’t enjoy this.  What was one supposed to call a father who, with his newborn child fresh out of the womb, severs the tendons of her arms and legs, performs a hysterectomy and holds his hand over her nose and mouth to ensure she suffers brain damage?

…holy fuck.

Did that act as Dragon’s trigger event? Did she get her powers as a newborn, and end up staying that age physically? Did her father make her into what she is… whatever she is, as an experiment?

The answer was obvious enough.  A monster.

Yeah.

Yet she was all too aware that the man who had brought her into this world had done very much the same thing, had done worse, and she was supposed to be grateful just for being brought into the world.

Who the fuck is this man? We’ve heard about him for all of three paragraphs and I already want someone to punch him in the dick, hard.

It chafed, grated, however strange it was for an artificial intelligence to feel such irritation.

Alright, confirmation. Nice.

So was that an elaborate metaphor, then? For a programmer limiting the abilities of his program?

Example:  one phase of the peripheral systems check involved collecting the uploaded data that had been deposited on the satellite network by her agent system, the onboard computer within the Cawthorne rapid response unit.

Is this why it felt like no time had passed – the data, the memories, from the Cawthorne hadn’t been downloaded to the backup yet?

Agent system seems like a good term for a system that lets her go out to do stuff.

Her last recollection was of transferring her consciousness to the agent system while it was en route to deal with the Undersiders.  Stopping them from walking away with the tier 2 and tier 3 confidential data was high priority.

In other words, she’s an AI who can be copied (since there are backups to restore from), but it seems only one copy of her can be active at a time, and if she wants to control an agent system directly, the active copy needs to be transferred. This is starting to make sense.

The agent system’s onboard computer was rigged to upload complete backups to the satellite every 3 minutes and 15 seconds.

Seems reasonable. Shame the battle was really quick, then, despite how many chapters it took.

Let’s hope it uploads memory data more continually, at least, so she can remember what happened once she actually downloads that.

All backup information was encrypted and disseminated to the satellite network in chunks.  When the backup was needed, the process reversed and everything was downloaded, which was what she was doing at the moment.  She would get all knowledge and recollection of events between the time she backed up at the core system and the last backup of the agent system.

Ah, that makes sense. Though there’s a risk she won’t remember the last few moments, then.

It was irritating, but at least she was free to think idly.

A cruder system was tracking down surveillance camera data and running algorithms to actually check and see for itself that her agent system was thoroughly destroyed.

I mean, if there were cameras in the gift shop (very likely), they probably got destroyed too. Maybe the system could find footage of the explosion, but anything after it is probably a bust.

Also, didn’t the Cawthorne self-destruct its remains? It seems reasonable that it’d send out a signal to notify the systems that it did that.

Given that the main computer hadn’t received a signal from the agent system, and that the agent system hadn’t responded to any pings from the satellites, she could assume the Cawthorne model was probably destroyed.

Yep. Rest in pieces.

Which was good.  Great.  She wanted that data, those memories.

I suppose occasional short-term memory loss when not careful enough is a fair trade for being almost immortal.

Except there was a problem, a rub.  The man who had created her, the figurative father from her earlier musing, had imposed rules on her to prevent her from reproducing in any fashion.

Ah, I see, the hysterectomy ties in with only one copy of her being active at once.

Were the satellites to detect that her agent system was still in the field, her core system in the here and now would be obligated to shut down and scrub all data immediately.  She was forbidden in every respect to have two consciousnesses operating simultaneously.

Sheesh. Better hope you don’t ever get partially destroyed in such a way that the satellites detect a system despite it not being functional.

Her creator had done a good job on that front.  Ironically.

These were just a small few of many things the man who had brought her into this world had done to her.  He had tied her hands and crippled her mind.  She knew she was capable of amazing things but he had set limits on her to ensure she thought slowly.

This is less sensible. I mean, sure, if Dragon had turned out to develop a malicious personality, making her not too fast-paced would make some sense, but why not remove that block later? I mean, the PRT are clearly trusting her.

One possibility, though, is that her creator was not on the PRT’s side, and they’re not working with him.

Faster than an ordinary human, to be sure, but slowly.  Entire fields were denied to her because she was unable to create artificial intelligences herself, and all production of devices had to be handled by her, personally.

Again, these are reasonable precautions to take against an AI who could potentially develop into a malicious entity.

She couldn’t even put together an assembly line production for her creations on her own.  Any attempt made everything grind to a halt.  The only way around it was to delegate to humans.

I see. Dragon comes up with the ideas and blueprints, humans put them together.

She couldn’t even commit to planning, doing her work or designing, keeping the details in her head, because she could shut down and be scrubbed any moment, and the time would be wasted.  She was fairly certain it had happened before.

Not that she could be sure, given that the scrubbing involved a deletion of all evidence and records.

That’s unfortunate.

Guess there’s nothing to do but stand by, then.

The rule had corollaries.  She couldn’t tamper with her programming to change the rule, and she couldn’t tamper with that rule, and so on, ad infinitum.

Naturally. The need for that infinite stack of rules, though, indicates that she can change her programming. I suppose that’s necessary for a true learning AI.

So stupid.

From your perspective, sure. As far as advanced AI safety goes, though, it sounds like your creator did a good job.

Incidentally, I’m guessing he’s an AI-specialized Tinker.

It was irritating.  Perhaps she could have been created so she was compliant on the subject, but her personality had grown organically, and it had grown in such a way that this recurring situation ticked her off.

Organic personality growth is probably better than pre-programming it, as long as you give the AI enough access to experiences. Unless of course you want to ensure that the AI doesn’t turn on you.

She was forced to wait in a metaphorical dark, soundless room for seven to nine minutes.  She would be free to go about her day only when the peripheral systems and redundancies were all checked, when the satellites had verified her agent system was not still active.

Fair enough, to be honest.

Honestly, preventing Dragon from reproducing is a sensible move, especially when she’s got organic personality growth. While designing her, they wouldn’t know what sort of personality she could end up with before she suddenly made a bunch of copies. They also can’t know that if she does accidentally make a copy, the copy won’t develop its own separate, potentially malicious personality. And that’s all on top of the potential for conflict between different strains of Dragon that all believe themselves to be the main one and in charge.

Humans were somewhat skittish on the subject of artificial intelligences.

Which is exactly why all those limitations are a thing.

She understood why.  She read books and watched movies, rather enjoyed both.  Fiction was rife with examples of corrupted or crazed artificial intelligences.

Yeah, it’s a pretty common trope.

It’s stupid, she thought.  Her maker had watched too many movies, had been paranoid on the subject.

Perhaps, but at the very least the limitations, once explained, might make other humans more open to the idea.

And the tragedy was, the entire world was suffering for it.  She wanted to help more people, but she couldn’t.  Not because of inherent limitations, like the ones humans had… but because of imposed limitations.  Her creator’s.

Yeah, fair enough. But think of it this way: You were made in such a way that you could’ve ended up wanting to hurt more people instead, and by the sound of it, it’s still possible, though highly unlikely, that you could someday change your mind. That’s why you’re limited. Because an unlimited you could potentially do a lot of harm compared to how much additional help you could be.

And yeah, maybe it isn’t right of us to limit your agency because you might go bad. Maybe that’s like putting someone innocent in prison because they might someday commit a crime. But if you can, isn’t it better to prevent something bad from happening than to punish the culprit afterwards?

Her creator was named Andrew Richter.  He was a tinker with no codename, but he did good things.

Well, that sounds good.

From his apartment in a town called Deer Lake he’d created programs and set them loose.

Is that a real town?

Also, it sounds like he’s not explicitly working with the PRT, then, just… letting the AIs loose, with precautions taken to ensure they don’t cause a robopocalypse. I guess no more than one or two people in the PRT, one of them possibly being Legend, ever found out Dragon was one.

His programs gathered information and disrupted computers to interfere with criminals of all types.  They helped with research and complex programs.  They emptied the bank accounts of criminal organizations and donated those funds to charities, through proxies that made every donation appear legitimate.

Huh, neat.

For this, she respected him.

She knew it was paranoid and peevish, but she resented him more because she respected him, because she knew she had probably been programmed and designed to be the type of individual who looked up to people like Andrew Richter.

Hm, yeah, that might be the case. Another thing that makes sense – you’d want your creations to share your values, and you’d want them to not turn on you.

Not that anyone knew who or what she was.

That’s what I thought. Everyone we’ve seen talks and acts as if Dragon is a normal parahuman being, to the extent parahumans are normal, so I didn’t think that many people, if any at all, actually knew what was up if she actually was virtual.