open problem

From Stampy's Wiki

Canonically answered

What is an "agent"?

A rational agent is an entity which has a utility function, forms beliefs about its environment, evaluates the consequences of possible actions, and then takes the action which maximizes its utility. They are also referred to as goal-seeking. The concept of a rational agent is used in economics, game theory, decision theory, and artificial intelligence.
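The definition above can be sketched in a few lines of code. This is a minimal toy illustration, not any particular AI design; all names (`choose_action`, `predict_outcome`, `utility`) are hypothetical and chosen for clarity.

```python
# Toy sketch of a rational agent: it evaluates the predicted consequence of
# each available action under its beliefs about the environment, then takes
# the action that maximizes its utility function.

def choose_action(actions, predict_outcome, utility):
    """Return the action whose predicted outcome scores highest on utility."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Example: a goal-seeking agent whose goal is to move its state toward 10.
state = 7
actions = [-1, 0, +1]
predict_outcome = lambda a: state + a    # belief: an action shifts the state
utility = lambda s: -abs(10 - s)         # utility: closeness to the goal

print(choose_action(actions, predict_outcome, utility))  # -> 1 (moves toward 10)
```

Everything an agent "wants" is packed into the utility function here, which is why so much of the discussion below centers on whether a system's goals conflict with human values.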

Editor note: there is work to be done reconciling this page, Agency page, and Robust Agents. Currently they overlap and I'm not sure they're consistent. - Ruby, 2020-09-15

More generally, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.[1]

There has been much discussion as to whether certain AGI designs can be made into mere tools or whether they will necessarily be agents which will attempt to actively carry out their goals. Any minds that actively engage in goal-directed behavior are potentially dangerous, due to considerations such as basic AI drives possibly causing behavior which is in conflict with humanity's values.

In Dreams of Friendliness and in Reply to Holden on Tool AI, Eliezer Yudkowsky argues that, since any intelligence must select correct beliefs from the much larger space of incorrect beliefs, it is already performing optimization and is therefore necessarily an agent.

References

  1. Russell, S. & Norvig, P. (2003). Artificial Intelligence: A Modern Approach (2nd ed.), p. 32.

Non-canonical answers

What are the "win conditions"/problems that need to be solved?


There is currently no clear win condition that most/all researchers agree on. Many researchers have their own paradigm and view the problem from a different angle.

In an ideal world, we would simultaneously solve the challenges across the many sub-fields of AI safety research; making progress in many of these fields is needed to win.

See also Concrete Problems in AI Safety.

Unanswered canonical questions