Adelar Scheidt's question on Avoiding Negative Side Effects
Hey Robert, don't you think those problems disappear when an AGI learns through neural networks? The same way we just "know" stepping on the baby is bad, not by manually assigning a value to it, but because we persistently strengthened the HUMAN - HARM - BAD - DON'T network. You know? There is a value, but it isn't assigned. And instead of assigning "don't care" to unspecified variables, maybe the network has a way of grouping families of events based on the nature of outcomes it has previously learned, much like a human brain does. We abstract real-world situations and apply the principles we learned to completely new situations, which surely isn't always perfect; it's only good enough to secure the species. But why wouldn't an AGI be able to do the same?
| Asked by: | Adelar Scheidt |
| Origin: | YouTube (comment link) |
| On video: | Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1 |
| Asked on Discord? | Yes |