UNIT – V
Learning from Examples: Forms of Learning, Supervised Learning, Learning Decision Trees, Evaluating and Choosing the Best Hypothesis, The Theory of Learning, Regression and Classification with Linear Models, Artificial Neural Networks.
Expert Systems Architectures: Introduction, Rule-Based System Architecture, Non-Production System Architecture, Dealing with Uncertainty, Knowledge Acquisition and Validation, Knowledge System Building Tools.

Outcomes: At the end of the course, students should be able to:
1. Understand the foundations and basic concepts of AI and intelligent agents.
2. Evaluate searching techniques for problem solving in AI.
3. Apply first-order logic and chaining techniques for problem solving.
4. Handle knowledge representation techniques for problem solving.
5. Apply supervised learning and neural networks for solving problems in AI.

TEXT BOOKS:
1. Russell S, Norvig P, Artificial Intelligence: A Modern Approach, 3rd edition, Pearson Education, 2010.
2. Dan W. Patterson, Introduction to Artificial Intelligence and Expert Systems, PHI, New Delhi, 2006.

REFERENCE BOOKS:
1. Rich E, Knight K, Nair S B, Artificial Intelligence, 3rd edition, Tata McGraw-Hill, 2009.
2. Luger George F, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 6th edition, Pearson Education, 2009.
3. Carter M, Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence, Edinburgh University Press, 2007.
UNIT – I
Introduction:
1.1 What Is AI
1.2 The Foundations of Artificial Intelligence
1.3 The History of Artificial Intelligence
1.4 The State of the Art
Intelligent Agents:
1.5 Agents and Environments
1.6 Good Behavior: The Concept of Rationality
1.7 The Nature of Environments
1.8 The Structure of Agents
1.1 WHAT IS AI?

The eight definitions of AI are laid out along two dimensions:
(i) The definitions on top are concerned with thought processes and reasoning, whereas those on the bottom address behavior.
(ii) The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal performance measure, called rationality. A system is rational if it does the "right thing," given what it knows.

1.1.1 Acting humanly: The Turing Test approach

The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer. The computer would need to possess the following capabilities:
• natural language processing (NLP) to enable it to communicate successfully in English;
• knowledge representation to store what it knows or hears;
• automated reasoning to use the stored information to answer questions and to draw new conclusions;
• machine learning to adapt to new circumstances and to detect and extrapolate patterns.
The so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will also need:
• computer vision to perceive objects; and
• robotics to manipulate objects and move about.
These six disciplines compose most of AI, and Turing deserves credit for designing a test that remains relevant 60 years later.

1.1.2 Thinking humanly: The cognitive modeling approach

If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds. There are three ways to do this:
(i) through introspection, trying to catch our own thoughts as they go by;
(ii) through psychological experiments, observing a person in action; and
(iii) through brain imaging, observing the brain in action.
Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program's input-output behavior matches corresponding human behavior, that is evidence that some of the program's mechanisms could also be operating in humans. For example, Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver" (Newell and Simon, 1961), were not content merely to have their program solve problems correctly. They were more concerned with comparing the trace of its reasoning steps to traces of human subjects solving the same problems. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.
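To make the Turing Test of Section 1.1.1 concrete, the imitation-game protocol behind it can be sketched as a small simulation: an interrogator poses written questions to two hidden respondents, one human and one machine, and must guess which label hides the machine. This is only an illustrative sketch with hypothetical class and function names, not an implementation from the textbook.

```python
import random

class ScriptedRespondent:
    """Hypothetical respondent that always answers from a fixed script."""
    def __init__(self, reply):
        self.reply = reply
    def answer(self, question):
        return self.reply

class NaiveInterrogator:
    """Hypothetical interrogator: asks one question, always guesses label 'A'."""
    def ask(self):
        return "Describe your earliest memory."
    def guess_machine(self, transcript):
        return "A"

def imitation_game(interrogator, human, machine, rounds=3):
    """Run one toy round of Turing's imitation game.

    The interrogator questions two respondents hidden behind labels
    'A' and 'B', then guesses which label is the machine. The machine
    "passes" to the extent that the guess is no better than chance.
    """
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:                 # hide identities at random
        labels = {"A": machine, "B": human}
    transcript = []
    for _ in range(rounds):
        q = interrogator.ask()
        transcript.append((q, {lab: r.answer(q) for lab, r in labels.items()}))
    guess = interrogator.guess_machine(transcript)
    return labels[guess] is machine           # True iff machine was detected

random.seed(0)                                # fixed seed: deterministic demo
detected = imitation_game(
    NaiveInterrogator(),
    human=ScriptedRespondent("I remember the seaside."),
    machine=ScriptedRespondent("I remember the seaside."))
```

Because both respondents give identical answers here, the naive interrogator can do no better than chance, which is exactly the condition under which a machine passes the test; a real evaluation would repeat many trials and check that the detection rate stays near 50%.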