CS W4701 Artificial Intelligence


CS W4701 Artificial Intelligence Fall 2013 Chapter 2: Intelligent Agents Jonathan Voris (based on slides by Sal Stolfo)

Outline Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types

Agents An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators Human agent: Sensors: Eyes, ears, and other organs Actuators: Hands, legs, mouth, and other body parts Robotic agent: Sensors: Cameras and infrared range finders Actuators: Various motors Agents vs. non-agents?

Agents and Environments The agent function maps from percept histories to actions: f: P* → A The agent program runs on the physical architecture to produce the agent function Agent function: Mathematical abstraction Agent program: Concrete implementation Agent = architecture + program
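
A minimal sketch of the distinction, with purely illustrative names: the agent function is an abstract mapping f: P* → A over complete percept histories, while the agent program is concrete code that receives one percept at a time and must keep its own history to realize the same mapping.

```python
def agent_function(percept_history):
    """Abstract mapping from a complete percept history to an action (toy example)."""
    return "act" if percept_history and percept_history[-1] == "ping" else "wait"

def make_agent_program(f):
    """The agent program: concrete code run by the architecture. It only ever sees
    the current percept, so it stores the history itself to realize f."""
    history = []
    def program(percept):
        history.append(percept)
        return f(tuple(history))
    return program

program = make_agent_program(agent_function)   # Agent = architecture + program
print(program("ping"))   # -> act
print(program("idle"))   # -> wait
```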

Vacuum-cleaner world Percepts: location and contents e.g., [A, Dirty] Actions: Left, Right, Suck, NoOp
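
For concreteness, a hedged sketch of the two-square vacuum world (squares A and B) and one possible agent for it, printing a fragment of the corresponding agent-function table; the strategy shown (suck if dirty, otherwise switch squares) is just one illustrative choice.

```python
LOCATIONS = ("A", "B")
ACTIONS = ("Left", "Right", "Suck", "NoOp")   # for reference; NoOp is never chosen by this agent

def vacuum_agent(percept):
    """Suck if the current square is dirty; otherwise move to the other square."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# A fragment of the corresponding agent-function table (percept -> action):
for location in LOCATIONS:
    for status in ("Clean", "Dirty"):
        print((location, status), "->", vacuum_agent((location, status)))
```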

Rational Agents Recall: Rational = Doing the right thing Okay, but what's that? What are the consequences of the agent's actions on the state of its environment? Sequence of agent actions -> sequence of environmental states How to control the agent? Introduce a performance measure: an objective criterion for success of an agent's behavior Possible performance measures for the vacuum world? Evaluate the performance of the environment or that of the agent?

Rational Agents Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Rational Agents Rationality is relative to a performance measure Once you have an agent, judge rationality based on: Performance measure Environmental knowledge Possible actions Sensors and accuracy

Rational Agents Rationality is distinct from omniscience Optimizing expected performance vs observed performance Only relying on observations so far Agents can perform actions in order to modify future percepts so as to obtain useful information Information gathering: actions intended to modify future percepts Exploration An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt) Agent street smarts vs book smarts

Environments Four elements of a task environment: PEAS Performance measure Environment Actuators Sensors Must first specify the setting for intelligent agent design

Environments Must first specify the setting for intelligent agent design Consider, e.g., the task of designing an automated taxi driver: Performance measure: Safe, fast, legal, comfortable trip, maximize profits Environment: Roads, other traffic, pedestrians, customers Actuators: Steering wheel, accelerator, brake, signal, horn Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
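
As an illustration only, the taxi's PEAS description could be captured as a small data structure (the class and field names below are assumptions, not part of the slides):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

automated_taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "engine sensors", "keyboard"],
)
```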

Environments Agent: Medical diagnosis system Performance measure: Healthy patient, minimize costs, lawsuits Environment: Patient, hospital, staff Actuators: Screen display (questions, tests, diagnoses, treatments, referrals) Sensors: Keyboard (entry of symptoms, findings, patient's answers)

Environments Agent: Part-picking robot Performance measure: Percentage of parts in correct bins Environment: Conveyor belt with parts, bins Actuators: Jointed arm and hand Sensors: Camera, joint angle sensors

Environments Agent: Interactive English tutor Performance measure: Maximize student's score on test Environment: Set of students Actuators: Screen display (exercises, suggestions, corrections) Sensors: Keyboard

Environment types Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time. Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic) Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. Next episode not affected by actions in previous episode

Environment types Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does) Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions. Single agent (vs. multiagent): An agent operating by itself in an environment.

Environment types

                  Chess with a clock   Chess without a clock   Taxi driving
Fully observable  Yes                  Yes                     No
Deterministic     Strategic            Strategic               No
Episodic          No                   No                      No
Static            Semi                 Yes                     No
Discrete          Yes                  Yes                     No
Single agent      No                   No                      No

The environment type largely determines the agent design The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent

Agent Functions and Programs An agent is completely specified by the agent function mapping percept sequences to actions Describes the agent's behavior Job of AI: design an agent program which implements the agent function Describes how the agent works The program must suit the architecture Inputs: Agent program: the current percept; Agent function: the percept history

Agent Functions and Programs One agent function (or a small equivalence class) is rational Aim: find a way to implement the rational agent function concisely

Table Lookup Agent Drawbacks: Huge table Takes a long time to build the table No autonomy Even with learning, it would take a long time to learn the table entries

Table Lookup Agent Why a chess table agent is a bad idea: Not enough space in the universe to store the table Would take longer than your lifetime to build Too massive to be learned through observation Where do you start? Reminder: The fact that a program can find a solution in principle does not mean that the program contains any of the mechanisms needed to find it in practice
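
Some back-of-the-envelope arithmetic, under the common assumptions of roughly 35 legal moves per position and games of about 100 plies, to show why the chess lookup table cannot be stored:

```python
from math import log10

branching, horizon = 35, 100              # assumed: ~35 legal moves per position, ~100-ply games
table_rows = branching ** horizon         # one row per distinct move sequence to tabulate
print(f"roughly 10^{log10(branching) * horizon:.0f} rows")   # ~10^154, vs. ~10^80 atoms in the observable universe
```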

Simple Reflex Agents Forget the percept history! Can't we just act on the current state?

Simple Reflex Agents

Simple Reflex Agents Pros: Simple Cons: For this to produce rational agents the environment must be? Often yields infinite loops Randomization can help, but only so much
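
A minimal sketch of a simple reflex agent in the spirit of the textbook pseudocode: interpret the current percept, then fire the first matching condition-action rule. All names here are illustrative.

```python
def simple_reflex_agent(rules, interpret_input):
    """Act on the current percept only: no memory of the percept history."""
    def program(percept):
        state = interpret_input(percept)      # describe the current situation from this percept alone
        for condition, action in rules:
            if condition(state):              # fire the first matching condition-action rule
                return action
        return "NoOp"
    return program

# Vacuum-world rules; with no memory, richer environments can trap such an agent in loops.
rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
agent = simple_reflex_agent(rules, lambda p: {"location": p[0], "status": p[1]})
print(agent(("A", "Dirty")))   # -> Suck
```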

Model-based Reflex Agents That didn't turn out so well Let's try keeping track of what we can't currently observe Internal state to track Is that car braking? Is there a car in my blind spot?? Where did I park???

Model-based Reflex Agents

Model-based Reflex Agents Getting better! But sometimes the model is insufficient: Impossible to model the hidden world accurately and reliably What to do when the environment doesn't imply an action? e.g., a fork in the road
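
A hedged sketch of the model-based variant: the agent folds each new percept (and its last action) into an internal state via an update function standing in for the world model; the names are illustrative.

```python
def model_based_reflex_agent(rules, update_state):
    """update_state(state, last_action, percept) stands in for the agent's world model."""
    state, last_action = {}, None

    def program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)   # track what can't be observed right now
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        last_action = "NoOp"
        return last_action

    return program

# e.g. update_state might infer "the car ahead is braking" from recent brake-light percepts,
# or remember which squares of the vacuum world have already been cleaned.
```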

Goal-based agents Let's introduce a goal concept Goals describe desirable situations The agent will account for the future
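
A minimal sketch of goal-based action selection: predict what each action would lead to and pick one whose outcome satisfies the goal; in general this one-step lookahead becomes search and planning. Names are illustrative.

```python
def goal_based_agent(actions, predict, goal_test):
    """predict(state, action) returns the expected next state; goal_test checks desirability."""
    def program(state):
        for action in actions:
            if goal_test(predict(state, action)):   # one-step lookahead toward the goal
                return action
        return "NoOp"   # no single action reaches the goal; in general, search or plan instead
    return program
```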

Goal-based agents

Utility-based Agents Goals are binary Did you meet it or not? What about a relative measure of achievement? Economists named this utility Utility function: an agent's internal performance measure An agent is rational if its utility function matches its environment's performance measure

Utility-based Agents

Utility-based Agents A utility function is not always needed for an agent to be rational But rational agents must act as if they have one Needed when: Goals conflict Goals are uncertain
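
A hedged sketch of utility-based action selection: score predicted outcomes with a utility function and choose the action with the highest expected utility (the outcomes and utility interfaces below are assumptions, not from the slides).

```python
def utility_based_agent(actions, outcomes, utility):
    """outcomes(state, action) yields (probability, next_state) pairs; utility scores a state."""
    def program(state):
        def expected_utility(action):
            return sum(p * utility(s) for p, s in outcomes(state, action))
        return max(actions, key=expected_utility)   # pick the action with highest expected utility
    return program
```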

Learning Agents That's nice, but how do you program these things? By hand? Turing says: That's for chumps Let the agent figure things out by itself! Learning element Makes improvements to the performance element Performance element Selects actions Previously this was the whole agent

Learning Agents New components: Learning element Makes improvements to the performance element Performance element Selects actions Previously this was the whole agent Critic Gives performance feedback to the learning element Needed because percepts don't capture performance Problem generator Suggests innovative actions
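
A structural sketch, with illustrative names, of how the four components could fit together; real learning elements are the subject of later chapters.

```python
class LearningAgent:
    """Structural sketch only: wiring of the four learning-agent components."""

    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # selects actions (previously the whole agent)
        self.learning_element = learning_element         # improves the performance element
        self.critic = critic                             # turns percepts into performance feedback
        self.problem_generator = problem_generator       # suggests exploratory actions

    def program(self, percept):
        feedback = self.critic(percept)                  # percepts alone don't say how well we're doing
        self.learning_element(self.performance_element, feedback)
        suggestion = self.problem_generator(percept)     # occasionally propose something innovative
        return suggestion if suggestion is not None else self.performance_element(percept)
```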

Learning Agents