Alfred mason-fayle's question on Quantilizers
What about minimizers (of costs and similar quantities)?

For instance, I speculate that a minimizer for 'total loss of agency for other rational agents' would avoid any action that was not reversible, since an irreversible action would deprive others of the ability to take that action themselves. Because the objective measures total loss rather than net loss, the agent could not increase the agency of some to make up for losses elsewhere.
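As a toy illustration of the total-vs-net distinction (a minimal sketch; the function and variable names are hypothetical and not from the video or the question):

```python
# Contrast "total loss" with "net loss" of agency across agents.
# `agency_delta` maps each agent to the change in its agency caused
# by a candidate action (positive = agency gained, negative = lost).

def total_loss(agency_delta: dict[str, float]) -> float:
    """Sum only the losses; gains for other agents cannot offset them."""
    return sum(-d for d in agency_delta.values() if d < 0)

def net_loss(agency_delta: dict[str, float]) -> float:
    """Sum all deltas; a gain for one agent can cancel a loss for another."""
    return -sum(agency_delta.values())

action = {"alice": +2.0, "bob": -1.0}  # helps alice, harms bob
print(total_loss(action))  # 1.0  -> the harm to bob still counts in full
print(net_loss(action))    # -1.0 -> alice's gain masks bob's loss
```

Under a total-loss objective, any action that harms even one agent scores worse than doing nothing, which is what rules out the offsetting move the question describes.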
|Asked by:|alfred mason-fayle|
|Origin:|YouTube (comment link)|
|On video:|Quantilizers: AI That Doesn't Try Too Hard|
|Asked on Discord?|Yes|