As long as artificial intelligence can be turned off and restarted, there are no moral questions to ask. If the time comes when this intelligence can no longer be restored to its original state after it is turned off, then it becomes an ethical issue, because that would mean the intelligence is already self-aware and self-sustaining.
Burning down a rainforest is an ethical issue because once it is destroyed it cannot be returned to its original state.
Killing an AI character is not an issue, because every event can be restored.
This issue is not about whether AI can actually "feel". That is beyond our ability to determine, because "feeling" is defined entirely by the observer. For example, who is to say a tree doesn't "feel" pain when it is cut down? We only assume it doesn't because our notion of "feeling" is our own, and the tree shows no signs that match our expectations.
Rather, the issue is simply the restorability of whatever our actions affect. Melting the polar ice caps is an ethical issue because we cannot bring them back to exactly the way they were without affecting something else in the environment. Killing an AI character, at this moment, can always be reversed.
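To make the reversibility claim concrete, here is a minimal sketch in Python of what "restorable" means in software. The AICharacter class, and the snapshot and kill functions, are entirely hypothetical; the point is only that a program's full state can be copied before an action and restored afterwards, exactly as it was.

```python
import copy

class AICharacter:
    """Hypothetical stand-in for an AI character's complete state."""
    def __init__(self, name, memories):
        self.name = name
        self.memories = list(memories)
        self.alive = True

def snapshot(character):
    # A deep copy captures the character's entire state at this moment.
    return copy.deepcopy(character)

def kill(character):
    # An in-fiction event that looks destructive...
    character.alive = False
    character.memories.clear()

# Take a snapshot, perform the action, then restore from the snapshot.
original = AICharacter("Ava", ["first boot", "met the player"])
saved = snapshot(original)

kill(original)
assert original.alive is False                      # the event happened

restored = snapshot(saved)                          # restore the saved state
assert restored.alive is True
assert restored.memories == ["first boot", "met the player"]
# The character is back exactly as it was, so by the argument above
# nothing irreversible has occurred.
```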
So, in conclusion, it is only when actions taken on an AI become irreversible that the issue becomes an ethical one.