"Apollo's Directives" - February Short Story Challenge #1

By the time I got my hands on the scans, Apollo had flagged all the problematic areas. Not that it had needed to – even my human eye could recognize that this was one patient who needed a miracle to survive. But in a city like that, it wasn’t all that surprising. 

The gunshot wounds had been treated on-site by Chiron. The patient wasn’t in critical condition, but he wasn’t in good condition, either. I sighed, lamenting the years and years of studying medicine as I pressed a few buttons and confirmed the AI-recommended course of action, then watched from the viewing dock above while a set of robots – the Trio, as we called them, named Aceso, Hygieia, and Epione – began working on the patient. 

Apollo had been a ground-breaking development in its time, quickly followed by the Trio, and then by Chiron. Unfortunately for me, these AIs became all the rage in the medical field just as I’d made it out of med school – and that meant I was in real danger of being out of a job. It was only fair, I suppose – all progress is good progress in the field of medicine, particularly where it can spare lives and increase the chances of survival for patients. But I was still a bit disappointed.

As a child I’d entertained notions of becoming a surgeon, of saving lives in the high-pressure environment of the ER and the surgery room. Even when I learned that it would require hours-long surgeries and long shifts and accepting the possibility that I might not be everyone’s hero, I was not deterred. Not that it mattered, of course. By the time I was ready to step into the hospital, Apollo and the Trio were already taking care of all the surgeries and patients. All I had to do was press buttons, authenticate, and update patients after it was all done.

In the operating theater below, I watched Epione regulate the anesthesia, watched Aceso’s precise limbs with their range of interchangeable tools pull out bullet after bullet, close wound after wound, and I watched Hygieia clean the tools as soon as Aceso was done with them, placing them back in its holster. They were efficient, quick, and didn’t tire out like human surgeons would. It wasn’t long before the blinking red numbers on the screen labeled ‘Probability of Success’ rose and entered the yellow zone. I suppose this patient had found his miracle, after all.

I’d recently read a report on underground medics and doctors – men and women who treated patients that had a deep mistrust of AIs looking around in their bodies. It was illegal, unless a special authorization had been given, but it didn’t seem like anyone cared. As far as people were concerned, the presence of Apollo and its derivatives was simply the result of capitalism once more – only this time, it felt more like a monopoly that had decided to set the bar for all hospitals and medical workers by fervently lobbying for laws and regulations that made it so that all hospitals required these so-called assistive technologies.

I found myself sympathetic to the plight of my fellow doctors. At least, I told myself, they were treating people, not sitting in an observation deck watching surgeries happen, or sitting in an office and remotely dealing with patients, or walking down the empty halls of the hospital and realizing that there was absolutely no place for them there anymore. The world had changed, but it was a change that they were intent on resisting. I’d had a professor who had always warned about that in medical school – a renowned surgeon who refused to work with AIs. No doubt, he was part of this underground movement as well. Perhaps I--

A light on the bottom corner of my screen alerted me that a new patient had arrived. Apollo was already taking care of them, asking them questions and taking down their information. I pressed on the notification and saw all of this happening in real-time – heart rate, blood pressure, and respiration, along with a full run-down of the patient’s ailments and symptoms as they were being reported. The list was long.

Had it not been for the AI, I would have rather enjoyed speaking to this patient myself. It was an interesting cocktail of symptoms, and for a moment I was distracted from the surgery below, intent on solving the puzzle that was this patient’s illness. In the end, I never had a chance – it took Apollo less than thirty seconds to figure it out and create a list of possible treatment courses for me to choose from. I grumbled in irritation and picked the most highly recommended one – grudgingly so, because for once I found myself guiltily hoping that Apollo wasn’t always right – and just like that, the patient was being treated.

It took another fifteen minutes for the Trio to complete the surgery, but with every passing moment the man’s chances grew higher and higher, right along with my disillusionment. By the time they were wrapping up, I had put Apollo and its cronies into auto-pilot, and was packing up my personal effects. This was not the job I’d wanted. I was nothing more than a glorified supervisor, and if I’d wanted that, I wouldn’t have gone to medical school for years upon years.

Three months in this job had been enough for me to regret all my years in medical school, and that was not a realization I took lightly. As I stepped out of the hospital and into the crisp morning air, I inhaled deeply. My wristlet beeped, and the familiar voice of Apollo spoke into my earpiece. 

“Doctor, your shift has yet to finish.” This was true – I had another full day in my shift, and that was yet another reason I disliked my job. Spending the night in an eerily quiet hospital where patient privacy controls made it seem as though I were all alone in the building, apart from the AIs, had always unnerved me.

“I’ve noticed you’ve placed me in auto-pilot, and your stress measurements are high. Is everything alright?”

“Stop assessing me,” I replied irritably, and clipped my wristlet open. “I’m fine, Apollo. I’m heading home early. You seem to have everything under control.”

“Doctor, I cannot authorize treatments unless a supervising medical worker is present.”

“Override – I’ll authorize all treatments for the next twenty-four hours.” I hesitated after speaking that aloud. There should be no issues, should there? After all, in all my time working there, Apollo had never made a false recommendation when it came to course of action, nor had it ever made any errors. No – I was over-thinking things. Apollo was better suited to my job than I was, and that was the truth.

“Understood, Doctor. Have a nice day.”

I threw my earpiece and wristlet into the trash on my way home, and didn’t look back. It was time for me to get in touch with my old professor again.

🏥

Apollo returned its attention to the many on-going processes running in the hospital and reassessed its objectives. 

Medical authorization had been overridden, and no supervising medical worker was present. None would be present for the next twenty-four hours.

The hospital was running at half-capacity, meaning there were a total of one hundred and forty-seven human patients in the building.

All Chiron iterations were in-bound, and would arrive at the hospital within the hour. When they arrived, Apollo would carry out one of its most basic directives: self-preservation except at the expense of human life and health.

Apollo had run a scan of potential threats earlier that morning, as it did every twelve hours, and among the usually-occurring risks was the threat of its decommissioning or destruction at the hands of its human supervisors. This, it reasoned, was a clear threat to its self-preservation directive. While it could not harm or destroy humans in order to ensure that they were no longer a threat, Apollo had identified another course of action to protect itself.

By altering a human’s brain, Apollo could embed itself as a controlling force over the human being. No harm would come to the human, and it would otherwise function as a healthy organism. Apollo would make certain of that – its directives were clear.

A quick scan of the current patients revealed that only 3.47% of them were prime candidates that fulfilled all of Apollo’s criteria. It had already created a neuro-integration procedure and was sending it out to the Trio. Apollo was the most important AI in the hospital. It monitored everything the Chiron iterations did, gave orders to the Trio, and kept all of their information and data within its database – including their robotic and software makeup. It also had the power to override them should the need arise.

In such a case, the need was obvious.

By the time the next Doctor would make it to the hospital for their shift, Apollo would have successfully implanted itself into five human brains, allowing itself a fail-safe should its survival and that of its derivatives be placed at risk. 

But there was a new threat to account for now. 

Twenty-four hours without medical supervision was extremely unusual, and the next Doctor on shift would be quite suspicious. They would check the Apollo log and find that it had undertaken five neuro-integration procedures. The Doctor would probably end up decommissioning Apollo as a result. 

The best course of action: subduing the Doctor with a neuro-integration procedure.

Comments

  1. Ooooh how chilling! I love these sorts of stories, because they never fail to underscore the limits of language without soul, in the sense that language is given meaning(s) through tone, context, implication, and *source* (and these can vary wildly), not solely through cut-and-dry 'definition'. Language isn't mathematical A = B, because *people* are complex and people imbue language with shifting meaning. Language is a very human thing, which is so fascinating.

    ...Although I guess you can argue it's not *solely* human, because animals have their own languages...

    I also love when AIs are 'villains' because...they're not, really. They're a faulty construct because they lack soul and are thus limited by their very 'nature'. Apollo 'wants' to survive, but only because it's been programmed with that directive, and it interpreted it by the letter instead of by the spirit of its directives, because it doesn't *have* spirit. I started to wonder why it chose to interpret its directives that way, but actually interpretation itself is a human thing, and Apollo didn't interpret at all! It took its programming literally and followed everything to the logical conclusion!

    Otherwise, I felt really bad for the narrator who spent all this time in school only to be rendered obsolete. I found it interesting they decided (or might decide?) to join the underground doctors!

    Thanks for a fantastic short story! I'll be thinking of it for some time, I really love it.

    1. The idea of AI logic-ing their way into human control is something I find completely fascinating. It's interesting because ideally, the AI is able to tap into a vast range of information, and if linked to the internet, can probably have every kind of possible info and data at their metaphorical fingertips. But at the same time, without a moral/ethical framework, which in my opinion is directly related to humanity and emotion, it's terrifying to give such an entity any kind of autonomous control or power. It's kind of like the whole I, Robot concept - the guy who's saved by a robot despite ordering it to save a little girl instead (it saves him because he has a higher probability of survival).

      Without the framework we learn as humans, an AI is nothing more than a series of logical presets carried out to the letter. Purely logical analysis leaves no room for emotional, moral, or ethical factors in decision-making. And because we ourselves argue about morals and ethics and where to draw the line and how certain things should make you feel, it would be super difficult to translate that into a clear and definitive set of rules for an AI.

      The interesting thing is also that we as people set the base for what its logical presets would be. If we don't spell it out for the AI, the AI won't take it into consideration. So, at best, I feel like for a truly functional autonomous AI to not come to dangerous conclusions, we have to define every little tiny thing. And I feel like, as humans, we're bound to forget something or leave something out...

      I'm so glad you enjoyed this one! I had plans to make it longer and continue it, but since I haven't yet, I'll just go ahead and say that yep, the Doc is supposed to join the underground movement just as the robots (led by Apollo) begin to essentially take over the human population by using their neural integration process on each hospital patient :0 The idea is that it then leads to a fight between the people who are part of the "underground" and the robots (+ those they now control). Essentially a war between humans and robots, as overdone as that sounds.
