Which moral theories would be easiest to encode into an AI?
There are three major approaches to normative ethics (along with attempts to unify two or all three of them): virtue ethics, deontological ethics, and consequentialist ethics.
Virtue ethicists believe that, at its core, leading an ethical life means cultivating virtues. In other words: what counts is not so much what one does moment to moment, but that one makes an effort to become the kind of person who habitually acts appropriately in all kinds of situations. A prominent example of virtue ethics is Stoicism.
Deontological ethicists believe that an ethical life is all about following certain behavioral rules, regardless of the consequences. Prominent examples include the Ten Commandments in Christianity, Kant's "categorical imperative" in philosophy, and Asimov's Three Laws of Robotics in science fiction.
Consequentialist ethicists believe that neither one's character nor the rules one lives by are what makes actions good or bad. Instead, they hold that only the consequences of an action count, both direct and indirect. A prominent example of consequentialist ethics is utilitarianism: the notion that the most moral actions are those that lead to the greatest good for the greatest number of individuals.
The short answer to the question of which of these might be the easiest to encode into an AI is: "We don't know." However, reinforcement learning agents (systems that can be understood as taking actions towards achieving a goal) fit most naturally into a consequentialist frame, since they are trained on the outcomes their actions produce. On the other hand, if AGI is built on top of large language models (AI models that take in some text and predict how the text is most likely to continue), it is less obvious which of these theories would be the most natural fit.
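To make the contrast more concrete, here is a minimal, purely illustrative Python sketch of how consequentialist and deontological framings might be encoded as action-selection rules. Everything in it is hypothetical: the function names (`consequentialist_choice`, `deontological_choice`, `violates_rule`, `estimated_outcome_value`) and the toy values are invented for illustration and are not taken from any real system.

```python
# Toy sketch: two ways of encoding moral behavior as action selection.
# All names and values are hypothetical, chosen only for illustration.

from typing import Callable, Iterable


def consequentialist_choice(
    actions: Iterable[str],
    estimated_outcome_value: Callable[[str], float],
) -> str:
    """Pick the action whose (estimated) consequences score highest,
    roughly how a reinforcement learning agent maximizes expected reward."""
    return max(actions, key=estimated_outcome_value)


def deontological_choice(
    actions: Iterable[str],
    violates_rule: Callable[[str], bool],
    estimated_outcome_value: Callable[[str], float],
) -> str:
    """First discard actions that break a fixed rule, regardless of how good
    their consequences look, then choose among the remaining actions."""
    permitted = [a for a in actions if not violates_rule(a)]
    if not permitted:
        raise ValueError("No permissible action available")
    return max(permitted, key=estimated_outcome_value)


if __name__ == "__main__":
    actions = ["lie", "tell_truth", "stay_silent"]
    value = {"lie": 0.9, "tell_truth": 0.4, "stay_silent": 0.1}.get

    def never_lie(action: str) -> bool:
        return action == "lie"

    print(consequentialist_choice(actions, value))           # -> "lie"
    print(deontological_choice(actions, never_lie, value))   # -> "tell_truth"
```

Note that virtue ethics, which is about cultivating character rather than scoring or filtering individual actions, does not reduce as neatly to a selection rule like either of these, which hints at why the honest answer remains "we don't know."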
It's worth noting that the ease with which we can encode these theories into an AI should not be the only criterion for choosing which theory to use.