
Notes on the practicalities of a cyberwarfare artificial intelligence

Posted on Thu Aug 16th, 2018 @ 1:24am by Lieutenant JG Rizena Virodin PhD

Dear Meilin,
I regret that I never had time to fully explore this with you. I had intended to complete a more in-depth analysis, but time was limited. I look forward to hearing about any follow-up you might accomplish.
Yours,
Rizena



Notes on the practicalities of a cyberwarfare artificial intelligence

• Rizena is Bolian, not human. Different cultural background, including outlook on AI. Important to acknowledge this as a possible bias.

• M-5 a concern, but possibly an outlier. Cmdr Data and Voyager’s EMH are examples of sapient machine intelligences with good working relationships with Starfleet and the Federation.

• Moriarty, not so much.

• Given the possible spread of moralities such an intelligence might hold, fall back on standard Starfleet protocols: assume peaceful and cooperative intent until proven otherwise.

• M-5 had the ability to “travel” electronically via comms. Data, the EMH and Moriarty were all confined to a body of some sort and interacted physically. Is that significant? Could the Palatine be considered a body? Does Starfleet have the ethical right to force such a being to remain in the ship?

• The EMH provides a good precedent here: he has a physical body that's not the ship (even if it's still holomatter) but exists at least partly in the ship’s databases. A holoprogram modelled along the same lines, but conceptualising the ship’s systems, could work. Such a being could leave the ship via mobile emitter technology or other holoemitters.

• However, this would not be an existing intellect, but a created one.

• Is it slavery to create a being for a specific task? What if they don't want to do that task?

• A gray area. Presumably, the being would find fulfilment in the task because of its programming. As creators, we would have an ethical obligation to help the being grow beyond its original programming, if it wanted to.

• If sapience is not desired, what constraint would prevent it? Both the EMH and Moriarty were originally intended to be non-sapient, yet sapience arose spontaneously in both. Is such mind-cauterising ethical?

• Practical considerations of actual performance.

• Potential for violation of privacy - the AI could access all files, and limiting its access limits its effectiveness. But Command *also* has access to all files in certain situations; existing protocols probably cover this.

• Potential to be compromised - if the AI were influenced socially (coercion, blackmail, etc) or hacked and controlled, it would be far more effective than a compromised biological security agent at turning our own systems against us. Restoring from an earlier back-up, or maintaining sibling intelligences as a counter, might be the answer.

• Conclusion: there are risks to ourselves and the galaxy at large; however, these can be mitigated through careful management.

• Additionally, there are ethical concerns with how we would treat such an intelligence; these, too, can be addressed through consideration and forethought.

• Practical issues exist but can largely be countered with standard security operating protocols.

 
