
Introduction to intelligent agents

Properties of an intelligent agent

An intelligent agent (IA, or simply agent) can be considered an autonomous decision-making system situated within some environment and capable of sensing and acting within that environment [7]. This environment may be populated with other agents, computational entities, or processes that interface with humans. An agent is thus endowed with the properties of autonomy, reactivity, proactivity, and social ability [3, 7].

Autonomy refers to an agent's capacity to make decisions without direct intervention from humans or others. Reactivity reflects the fact that the environment an agent inhabits is replete with stimuli, a subset of which the agent perceives and reacts to. Proactivity refers to an agent's capacity for goal-directed behaviour, undertaken on its own initiative. An agent's social ability is its capacity to communicate with other agents, and possibly humans, via some protocol in order to engage in cooperative problem solving, negotiation, or other social activities.
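
As a rough illustration, these four properties can be sketched as an interface contract. The types below (Percept, Action, Message) are hypothetical placeholders, not anything prescribed by [3] or [7]:

```java
// A minimal sketch of the four properties as Java interfaces; Percept, Action,
// and Message are hypothetical placeholder types invented for illustration.
interface Percept {}  // a stimulus sensed in the environment
interface Action {}   // an effect the agent can produce
interface Message {}  // a communication from another agent

interface Agent {
    Action react(Percept percept); // reactivity: respond to perceived stimuli
    Action pursueGoal();           // proactivity: goal-directed initiative
    void receive(Message message); // social ability: converse via some protocol
    // Autonomy is structural rather than a method: no external caller decides
    // which of the above the agent performs; the agent itself does.
}
```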

To conceptualize and reason about agents, we consider them as intentional systems that have beliefs, desires, and intentions [7]. The belief base of an agent can be specified in terms of some logic, say a multi-modal, higher-order logic [4], that admits rules for reasoning about the agent's beliefs.
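
To make the belief-desire-intention (BDI) picture concrete, here is a minimal sketch of the classic perceive-deliberate-act cycle. All type names are hypothetical placeholders; practical BDI systems, including the architecture of [4], are considerably richer:

```java
import java.util.HashSet;
import java.util.Set;

// A minimal sketch of a BDI deliberation loop over hypothetical marker types.
interface Belief {}
interface Desire {}
interface Intention {}
interface BdiPercept {}

abstract class BdiAgent {
    protected final Set<Belief> beliefs = new HashSet<>();
    protected Set<Desire> desires = new HashSet<>();
    protected Set<Intention> intentions = new HashSet<>();

    // One cycle of the perceive-deliberate-act loop.
    public void step(BdiPercept percept) {
        reviseBeliefs(percept);            // update beliefs from perception
        desires = generateDesires();       // options the agent could pursue
        intentions = deliberate(desires);  // commit to a subset of the options
        act(intentions);                   // take one step toward commitments
    }

    protected abstract void reviseBeliefs(BdiPercept percept);
    protected abstract Set<Desire> generateDesires();
    protected abstract Set<Intention> deliberate(Set<Desire> options);
    protected abstract void act(Set<Intention> intentions);
}
```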

Agents with artificial intelligence

Artificial intelligence techniques can be used in designing and implementing agents [1]. The level of intelligence that an agent requires depends on its degree of autonomy, mobility, and longevity, and on the level of uncertainty in its problem domain or environment. An agent that must deal with a wide range of situations needs a sufficiently broad knowledge base and a flexible inference engine. For mobile agents, a small code size is advantageous, which places restrictions on the size of the knowledge base and the inference engine. Information security can be an issue irrespective of an agent's mobility: the if-then rules of an inference engine can be reverse engineered by observing its inputs and outputs, a technique that is not readily applicable to a neural network owing to its black-box nature.
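
For concreteness, the following is a minimal sketch of a knowledge base of if-then rules paired with a forward-chaining inference engine; the flat string facts and exhaustive matching are simplifying assumptions made for illustration:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A minimal sketch of a rule-based knowledge base with a forward-chaining
// inference engine; facts are plain strings for simplicity.
class TinyRuleEngine {
    // An if-then rule: if every antecedent holds, assert the consequent.
    record Rule(Set<String> antecedents, String consequent) {}

    private final List<Rule> knowledgeBase = new ArrayList<>();
    private final Set<String> facts = new HashSet<>();

    void addRule(Set<String> antecedents, String consequent) {
        knowledgeBase.add(new Rule(antecedents, consequent));
    }

    void addFact(String fact) { facts.add(fact); }

    // Fire rules until no rule adds a new fact (a fixed point).
    Set<String> infer() {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule rule : knowledgeBase) {
                if (facts.containsAll(rule.antecedents())
                        && facts.add(rule.consequent())) {
                    changed = true;
                }
            }
        }
        return facts;
    }
}
```

Probing such an engine with chosen facts and observing which conclusions emerge is precisely the input-output observation by which its rules could be reverse engineered, which is the security concern noted above.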

As regards longevity, an agent that exists for a considerable time within an environment might require some learning mechanism to improve its performance over time; a short-lived agent might not need such a mechanism to achieve its goal. Where a problem domain or environment exhibits a degree of uncertainty, an agent needs to be sufficiently flexible to handle that uncertainty. This flexibility can be achieved with techniques such as Bayesian networks, if-then rules with certainty factors, or fuzzy logic [5].
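
As a worked example of one of these techniques, the sketch below combines MYCIN-style certainty factors from two rules that support the same hypothesis; the rules and their values are invented for illustration:

```java
// A minimal sketch of MYCIN-style certainty factors, one of the uncertainty
// techniques mentioned above; the example rules and numbers are invented.
public class CertaintyFactorDemo {
    // Combine two positive certainty factors for the same hypothesis.
    static double combine(double cf1, double cf2) {
        return cf1 + cf2 * (1.0 - cf1);
    }

    public static void main(String[] args) {
        double fromRuleA = 0.6; // e.g. "if sensor A reads high then fault (cf 0.6)"
        double fromRuleB = 0.5; // e.g. "if sensor B reads high then fault (cf 0.5)"
        // Independent evidence strengthens belief without exceeding 1.0:
        // 0.6 + 0.5 * (1 - 0.6) = 0.8
        System.out.printf("combined cf = %.2f%n", combine(fromRuleA, fromRuleB));
    }
}
```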

Why intelligent agents?

A motivating factor for introducing intelligent agents is to help human users understand and manage complexities arising from information and communications technology (ICT) [3]. As ICT systems grow in size and complexity, we require techniques for coping with this complexity. For example, problem solving can be delegated to an organization of agents, each of which is an autonomous computational entity that addresses an aspect of the problem under consideration. Such an organization of computational entities is composed of heterogeneous agents that communicate via a standard protocol such as the agent communication language (ACL) [2].

Applications of agents

Green et al. [3] survey a number of applications of intelligent agents: intelligent user interfaces, distributed agent problem solving, and mobile agents. An intelligent user interface is a user-centric system that adapts its behaviour to maximize the productivity of the current user's interaction with the system. An agent forming part of an intelligent user interface is thus expected to draw a substantial number of techniques from the discipline of artificial intelligence. In particular, such an agent is composed of a knowledge base, an inference engine, and a learning component.
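
One plausible, entirely hypothetical composition of these three components is sketched below; none of the type or method names come from [3]:

```java
// A minimal sketch of an interface agent composed of a knowledge base, an
// inference engine, and a learner; all names are hypothetical placeholders.
class InterfaceAgent {
    interface KnowledgeBase { void record(String observation); }
    interface InferenceEngine { String chooseAdaptation(KnowledgeBase kb); }
    interface Learner { void update(String observation, KnowledgeBase kb); }

    private final KnowledgeBase knowledgeBase;
    private final InferenceEngine inferenceEngine;
    private final Learner learner;

    InterfaceAgent(KnowledgeBase kb, InferenceEngine engine, Learner learner) {
        this.knowledgeBase = kb;
        this.inferenceEngine = engine;
        this.learner = learner;
    }

    // Observe the user, learn from the interaction, then adapt the interface.
    String onUserAction(String observation) {
        knowledgeBase.record(observation);
        learner.update(observation, knowledgeBase);
        return inferenceEngine.chooseAdaptation(knowledgeBase);
    }
}
```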

In distributed agent problem solving, we implement a network of agents that solve a common problem, with communication within the network accomplished via, say, ACL. Such a network of agents is called a multi-agent system (MAS). A MAS is thus a distributed problem solver in which agents communicate amongst themselves, coordinate their activities, and negotiate in the event of conflicts. A MAS can be organized in various ways: the network of agents can be organized according to democratic principles, structured hierarchically, or left open-ended like a bazaar.
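
To illustrate, here is a minimal sketch of an ACL-style message. The fields mirror common ACL parameters (performative, sender, receiver, content), but the record itself is an invented illustration rather than the actual ACL or FIPA API:

```java
// A minimal sketch of an ACL-style message exchanged between agents in a MAS.
public record AclMessage(
        String performative, // e.g. "inform", "request", "propose"
        String sender,       // name of the sending agent
        String receiver,     // name of the receiving agent
        String content) {    // the proposition or requested action

    public static void main(String[] args) {
        // One step of a negotiation: a buyer agent proposes a price to a seller.
        AclMessage proposal =
                new AclMessage("propose", "buyer-1", "seller-3", "price(42)");
        System.out.println(proposal);
    }
}
```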

In contrast to an agent that forms part of a MAS, a mobile agent is endowed with the capacity to migrate across a heterogeneous network of computer systems. A mobile agent is essentially an intelligent agent inhabiting a mobile agent environment, and it comprises the following models: an agent model, a life-cycle model, a computational model, a security model, a communication model, and a navigation model.
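
The life-cycle and navigation models might surface to the programmer as hooks along the following lines; the interface is hypothetical, loosely inspired by systems such as IBM Aglets, and not a real API:

```java
import java.io.Serializable;

// A minimal sketch of the hooks a mobile agent environment might expose.
interface MobileAgent extends Serializable {
    void beforeDeparture();      // life-cycle: release local resources
    void onArrival(String host); // life-cycle: resume execution on the new host
    String chooseNextHost();     // navigation model: decide where to migrate
    // Extending Serializable lets the environment capture the agent's state,
    // so the running agent, not merely its code, moves across the network.
}
```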

Not a silver bullet

In a sense, a software agent is a conceptual tool that assists us in managing the complexity of software engineering. Booch's principles of abstraction, organization, and modularity allow software engineers to conceptualize and decompose software systems into manageable units. An agent embodies these three principles, along with others such as concurrency and social ability, but it is not a silver bullet for all software engineering problems.

Tveit [6] summarizes the potential pitfalls of agent-oriented software engineering under five categories: political, conceptual, analysis and design, agent-level, and society-level. Political pitfalls arise when the concept of agents is oversold as the ultimate solution to the complexity of software engineering. Conceptual pitfalls arise when developers forget that agents are, in the end, multithreaded software. Analysis and design pitfalls arise when developers ignore related software engineering methodologies. Agent-level pitfalls occur when developers endow agents with too much or too little artificial intelligence, and, similarly, society-level pitfalls occur when developers use too many or too few agents in an agent-oriented system.

References

  1. J. P. Bigus and J. Bigus. Constructing Intelligent Agents with Java: A Programmer’s Guide to Smarter Applications. John Wiley & Sons, New York, NY, USA, 1998.
  2. M. R. Genesereth and S. P. Ketchpel. Software agents. Communications of the ACM, 37(7):48-53, 1994.
  3. S. Green, L. Hurst, B. Nangle, P. Cunningham, F. Somers, and R. Evans. Software agents: A review. Technical Report TCD-CS-1997-06, Trinity College Dublin, May 1997.
  4. J. W. Lloyd and T. D. Sears. An architecture for rational agents. In M. Baldoni, U. Endriss, A. Omicini, and P. Torroni, editors, DALT, volume 3904 of Lecture Notes in Computer Science, pages 51-71. Springer, 2005.
  5. M. Negnevitsky. Artificial Intelligence: A Guide to Intelligent Systems. Pearson Education, Harlow, Essex, England, 2002.
  6. A. Tveit. A survey of agent-oriented software engineering. In First NTNU CSGSC, 2001.
  7. M. Wooldridge. Agent-based software engineering. IEE Proceedings on Software Engineering, 144(1):26-37, 1997.