
Note for Artificial Intelligence - AI By ANNA SUPERKINGS



We use the term performance measure for the criteria that determine how successful an agent is. Obviously, there is no one fixed measure suitable for all agents.

An agent program is a function that implements the agent mapping from percepts to actions. We assume this program will run on some sort of architecture (a computing device). Hence an agent is a combination of architecture and program:

    agent = architecture + program

The skeleton of an agent:

    function SKELETON-AGENT(percept) returns action
        static: memory, the agent's memory of the world
        memory <- UPDATE-MEMORY(memory, percept)
        action <- CHOOSE-BEST-ACTION(memory)
        memory <- UPDATE-MEMORY(memory, action)
        return action

A table-driven agent function:

    function TABLE-DRIVEN-AGENT(percept) returns action
        static: percepts, a sequence, initially empty
                table, a table indexed by percept sequences, initially fully specified
        append percept to the end of percepts
        action <- LOOKUP(percepts, table)
        return action

Four types of agent program:
1. Simple reflex agents
2. Agents that keep track of the world
3. Goal-based agents
4. Utility-based agents

Simple reflex agents:
The structure of a simple reflex agent, in schematic form, shows how the condition-action rules allow the agent to make the connection from percept to action.

CS6659 & Artificial Intelligence Unit I
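The TABLE-DRIVEN-AGENT pseudocode above can be sketched in Python. This is a minimal illustration, not part of the original notes: the percept format (location, status) and the lookup table are hypothetical, loosely modeled on a two-square vacuum world.

```python
# Sketch of a table-driven agent: the action is found by looking up
# the ENTIRE percept sequence seen so far in a prebuilt table.

def make_table_driven_agent(table):
    """Return an agent function with its own percept history."""
    percepts = []  # the 'static' percept sequence, initially empty

    def agent(percept):
        percepts.append(percept)               # append percept to percepts
        return table.get(tuple(percepts))      # LOOKUP(percepts, table)

    return agent

# Hypothetical table for a two-step vacuum-world-style agent.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("A", "clean"), ("B", "dirty")): "suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "clean")))   # right
print(agent(("B", "dirty")))   # suck
```

Note how quickly the table grows: it must contain an entry for every possible percept sequence, which is why table-driven agents are impractical beyond toy problems.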


    function SIMPLE-REFLEX-AGENT(percept) returns action
        static: rules, a set of condition-action rules
        state <- INTERPRET-INPUT(percept)
        rule <- RULE-MATCH(state, rules)
        action <- RULE-ACTION[rule]
        return action
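A minimal Python sketch of SIMPLE-REFLEX-AGENT follows. The percept format, the interpretation step, and the rule set are illustrative assumptions; here RULE-MATCH and RULE-ACTION collapse into a single dictionary lookup.

```python
# Sketch of a simple reflex agent: the action depends only on the
# current percept, with no memory of earlier percepts.

def interpret_input(percept):
    """INTERPRET-INPUT: extract the relevant state from the percept.
    Here the state is just the status part of a (location, status) pair."""
    location, status = percept
    return status

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)
    action = rules.get(state)   # RULE-MATCH + RULE-ACTION in one step
    return action

# Hypothetical condition-action rules.
rules = {"dirty": "suck", "clean": "move"}
print(simple_reflex_agent(("A", "dirty"), rules))   # suck
print(simple_reflex_agent(("B", "clean"), rules))   # move
```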


2. Agents that keep track of the world (reflex agents with internal state)

The simple reflex agent described before will work only if the correct decision can be made on the basis of the current percept alone. Sometimes, maintaining INTERNAL STATE helps the agent choose a better action.

    function REFLEX-AGENT-WITH-STATE(percept) returns action
        static: state, a description of the current world state
                rules, a set of condition-action rules
        state <- UPDATE-STATE(state, percept)
        rule <- RULE-MATCH(state, rules)
        action <- RULE-ACTION[rule]
        state <- UPDATE-STATE(state, action)
        return action

3. Goal-based agents

Knowing about the current state of the environment is not always enough to decide what to do. In other words, as well as a current state description, the agent needs some sort of GOAL information, which describes situations that are desirable.

4. Utility-based agents

Goals alone are not really enough to generate high-quality behavior. For example, there are many action sequences that will get a taxi to its destination, thereby achieving the goal, but some are quicker, safer, more reliable, or cheaper than others. Goals provide only a crude distinction between "happy" and "unhappy" states, whereas a more general performance measure should allow a comparison of different world states. If one world state is preferred to another, then it has higher utility for the agent.
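The REFLEX-AGENT-WITH-STATE pseudocode can be sketched in Python as a small class whose internal model persists across calls. The world model, rules, and percept format below are hypothetical examples (a two-square vacuum world), not from the notes.

```python
# Sketch of a reflex agent with internal state: the agent remembers
# what it has seen and predicts the effect of its own actions.

class ReflexAgentWithState:
    def __init__(self):
        self.state = {}   # internal model: square -> last known status

    def __call__(self, percept):
        location, status = percept
        self.state[location] = status   # UPDATE-STATE with the percept

        # Condition-action rules, using the internal model.
        if status == "dirty":
            action = "suck"
        elif len(self.state) == 2 and all(
                s == "clean" for s in self.state.values()):
            action = "stop"             # model says both squares are clean
        else:
            action = "right" if location == "A" else "left"

        # UPDATE-STATE with the chosen action's predicted effect.
        if action == "suck":
            self.state[location] = "clean"
        return action

agent = ReflexAgentWithState()
print(agent(("A", "clean")))   # right
print(agent(("B", "dirty")))   # suck
print(agent(("B", "clean")))   # stop
```

The third call shows the point of internal state: the percept alone ("B is clean") does not justify stopping, but the model also remembers that A was clean.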


Search: formulate a problem as a state space search by specifying the legal problem states, the legal operators, and the initial and goal states.

• A state is defined by the specification of the values of all attributes of interest in the world.
• An operator changes one state into another; it has a precondition, which is the value of certain attributes prior to the application of the operator, and a set of effects, which are the attributes altered by the operator.
• The initial state is where you start.
• The goal state is a partial description of the solution.

State Space Search Notations

The notions involved in state space search are:
1) An initial state is the description of the starting configuration of the agent.
2) An action (or operator) takes the agent from one state to another state, called a successor state. A state can have a number of successor states.
3) A plan is a sequence of actions. The cost of a plan is referred to as the path cost.

Problem Formulation and Problem Definition

Problem formulation means choosing a relevant set of states to consider and a feasible set of operators for moving from one state to another. Search is the process of considering various possible sequences of operators applied to the initial state, and finding a sequence which culminates in a goal state.

A search problem consists of the following:
• S: the full set of states
• s0: the initial state
• A: S → S, a set of operators
• G: the set of final states, with G ⊆ S

The search problem is to find a sequence of actions which transforms the agent from the initial state to a goal state g ∈ G.
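The search-problem definition (S, s0, A, G) above can be sketched in Python with a breadth-first search that returns a plan, i.e. a sequence of actions. BFS is one standard uninformed strategy chosen here for illustration; the tiny state graph is a hypothetical example, not from the notes.

```python
# Sketch of a state space search: states, operators (as labeled edges),
# an initial state s0, and a goal test standing in for the set G.
from collections import deque

def bfs(s0, successors, is_goal):
    """successors(state) -> list of (action, next_state) pairs.
    Returns the shortest plan (list of actions) or None."""
    frontier = deque([(s0, [])])
    visited = {s0}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan                     # a plan: sequence of actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None                             # no goal state is reachable

# Hypothetical state space: states are labels, operators are moves.
graph = {"s0": [("a", "s1"), ("b", "s2")],
         "s1": [("c", "g")],
         "s2": [],
         "g":  []}

plan = bfs("s0", lambda s: graph[s], lambda s: s == "g")
print(plan)   # ['a', 'c']
```

With unit step costs, the path cost of this plan is simply its length, 2; BFS finds a plan with the fewest actions.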
