Intelligent Agents and Multiagent Systems

aelbereth

Hello
The information below describes one of the more important technologies that make up the so-called Semantic Web (see the article Sieci Semantyczne - WWW następnej generacji published in the Z pogranicza category).

By way of explanation
I suspect I will be attacked fiercely for posting the English-language version. There are a few reasons why I did not translate it: 1) I am posting a chapter of my MSc thesis, which I am writing in English, 2) I believe that anyone who considers themselves a programmer and computer scientist should know English, 3) it would be a waste of time to translate and Polonize it, or to play at coining new words, 4) all the more so because the topic is not light, and I suspect few will read it, let alone understand it - but after all, something more ambitious should be posted for once, not just the basics of the basics of the basics ;)

Intelligent Agents and Multiagent Systems

Current state of computing
The use of computers and software is becoming ever more ubiquitous - computers help people in many disciplines: communication, industry, banking, shopping and entertainment, not to mention academic research and scientific purposes. In order to provide all of the aforementioned amenities, computer programs are becoming more sophisticated and complicated. Increasingly often, architectures are interconnected and exhibit some notion of intelligence, delegation and human-orientation. In order to achieve such characteristics, programmers have to concentrate on different problems than before. The method of creating software has therefore progressed from sub-routines, through procedures & functions and abstract data types, to objects. Because software is available not only for scientific purposes but, like any commercial product, to a wide range of clients, there is a need to build software that is as easy to use as possible. Users do not need to know how the system functions (in the technical sense) or how it performs given tasks. It should automate complicated and time-consuming tasks, minimising human interaction. In many complicated situations it is also desirable to enable such programs to make autonomous and intelligent decisions on behalf of the user, making them much more independent and flexible. To achieve that, programs should be intelligent, autonomous, cooperative and adaptive. It seems that the next step in software engineering that leads to this notion is the successor of object-oriented programming: the intelligent agent.

Agent definition
An intelligent agent is a computer system capable of flexible, autonomous action on behalf of its user or owner. The main point about agents is that they are autonomous: capable of acting independently and exhibiting control over their internal state. An agent should therefore be capable of flexible autonomous action in some environment.
One of the most important characteristics of an agent is its flexibility. Agents are situated in a specific environment and should therefore adapt to it and control part of it. According to M. Wooldridge and N. Jennings, flexible agents are:
– Reactive,
– Pro-active,
– Social,

Agent characteristics

Environment types
A system is a pair containing an agent and an environment. Each agent (or any software program) exists in and interacts with a specific type of environment. Depending on the environment type and its characteristics, the agent has to adopt different strategies in order to operate successfully and achieve its goals. The real world and the internet may be described as highly inaccessible, non-deterministic and dynamic environments. An agent navigating internet resources is not able to obtain complete, accurate, up-to-date information about the environment's state. When performing actions the agent cannot assume that they will have a single guaranteed effect - there is no one certain state that will result from performing an action. Furthermore, the internet is not a static environment that can be assumed to remain unchanged except through the agent's own actions. It is a highly dynamic environment that has other processes operating on it, and which hence changes in ways beyond the agent's control.

Reactivity
If a program's environment is guaranteed to be fixed, the program need never worry about its own success or failure - it simply executes blindly. An example of a fixed environment is a compiler. In the real world or on the internet things change and information is incomplete. Many interesting environments are dynamic. Software is hard to build for dynamic domains: the program must take into account the possibility of failure - and therefore even verify whether an action is worth executing. A reactive system is one that maintains an ongoing interaction with its environment and responds to changes that occur in it (in time for the response to be useful). An agent's reactivity is highly dependent on the type of environment: the more unpredictable the environment, the more difficult it is to build an agent that operates in it.

Proactiveness
Reacting to an environment is easy (stimulus-response rules). But agents are designed to do things for clients. Their reactions are predefined by the tasks they have to accomplish - goal-directed behaviour. An agent should therefore not only interact with the environment and respond to occurring changes (passive behaviour), but realise its agenda - generate goals and attempt to achieve them. It should recognise opportunities and take the initiative.
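The balance between the two behaviours can be sketched in a few lines of Python (a minimal, illustrative sketch - the rule and goal names are invented, not taken from any agent framework): stimulus-response rules fire first, and only in their absence does the agent pursue its own agenda.

def agent_step(percepts, rules, goals):
    # Reactive part: stimulus-response rules take priority, so the
    # agent responds to changes in the environment in time.
    for stimulus, response in rules:
        if stimulus in percepts:
            return response
    # Pro-active part: nothing to react to, so pursue the agenda.
    if goals:
        return "pursue:" + goals[0]
    return "idle"

rules = [("obstacle_ahead", "avoid_obstacle"), ("battery_low", "recharge")]
goals = ["deliver_package"]
print(agent_step({"obstacle_ahead"}, rules, goals))  # avoid_obstacle
print(agent_step(set(), rules, goals))               # pursue:deliver_package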

Social Ability
The real world is a multi-agent environment - it is impossible (or very difficult) to achieve goals without taking others into account. Some goals can only be achieved with the cooperation of others. The same holds for many computer environments, such as the internet. Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with others.

Other properties, sometimes discussed in the context of agency:

  • mobility: the ability of an agent to move around an electronic network,
  • veracity: an agent will not knowingly communicate false information,
  • benevolence: agents do not have conflicting goals, so every agent will always try to do what is asked of it,
  • rationality: agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved - at least insofar as its beliefs permit,
  • learning/adaptation: agents improve their performance over time.

Agents and other mainstream computing disciplines
Agents may be described as components that inherit their characteristics from various computing disciplines. The most important are: Object Oriented Programming, Expert Systems, AI (Artificial Intelligence), and Intentional and Post-declarative Systems. While it is safe to say that agents inherit some behaviours from the aforementioned disciplines, it is not appropriate to conclude that they are merely a mix of them.

Agents and Objects
Agents have a very similar architecture to objects - they encapsulate some state, communicate via message passing and have methods that perform operations. These similarities, however, hold only from an architectural point of view. Comparing the two paradigms, agents differ in the following areas:
– agents are autonomous: agents embody a stronger notion of autonomy than objects; in particular, they decide for themselves whether or not to perform an action on request from another agent,
– agents are smart: capable of flexible (reactive, pro-active, social) behaviour, whereas the standard object model has nothing to say about such types of behaviour,
– agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control.
Objects do it for free... agents do it because they want to; agents do it for money.
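The difference in autonomy can be made concrete with a small Python sketch (class and method names are invented for illustration): the object's method executes whenever it is invoked, while the agent deliberates over the request and may refuse it.

class TransferObject:
    # An object: the method executes whenever another object invokes it.
    def transfer(self, amount):
        return "transferred " + str(amount)

class TransferAgent:
    # An agent: it decides for itself whether to honour a request.
    def __init__(self, trusted):
        self.trusted = trusted

    def request_transfer(self, requester, amount):
        if requester not in self.trusted or amount > 1000:
            return "refuse"  # autonomy: the agent may decline
        return "agree: transferred " + str(amount)

print(TransferObject().transfer(500))                             # always runs
print(TransferAgent({"agent1"}).request_transfer("agent1", 500))  # agree
print(TransferAgent({"agent1"}).request_transfer("agent9", 500))  # refuse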

Agents and Expert Systems
Expert systems are used to provide expertise about some (abstract) domain of discourse (e.g., the health service, blood diseases). An example expert system is MYCIN, which knows about blood diseases in humans. It has a wealth of knowledge about blood diseases, in the form of rules. A doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries.

Main differences:
Agents (co)exist in an environment. For example, agents in an internet network can change their location, move from one point of the network to another, and interact with other agents and programs.
An expert system is not aware of the world - the only information it obtains comes from asking the user questions.
Agents perform actions that change the state of the world. For example, an agent may book a ticket, perform a money transfer or buy a book, whereas an expert system is an information-only system that is not aware of the surrounding environment (although it holds an explicit representation of it inside itself).
Some real-time (typically process control) expert systems are, however, agents.

Agents and AI
AI aims to build systems that can (ultimately) understand natural language, recognise and understand scenes, use common sense, think creatively, etc. - all of which are very hard. An agent is a system that can choose the right action to perform, typically in a limited domain. It is not necessary to solve all the problems of AI to build a useful agent: a little intelligence goes a long way... As Oren Etzioni put it, speaking about the commercial experience of NETBOT, Inc.: 'We made our agents dumber and dumber and dumber... until finally they made money.'

Agents as Intentional Systems
To explain the complicated behaviour of computer systems, or even human activity, it is useful to describe it in terms of beliefs, desires and intentions. This way it is relatively easy to explain even very difficult processes without describing them in detail (which is often very hard for a complicated system). As an example, consider the following sentences:

Dorian worked hard because he wanted to earn more money.
Carol took her mobile because she believed that someone would call her.

In the above examples human behaviour is predicted and explained through the attribution of attitudes such as believing and wanting. The attitudes employed in such folk-psychological descriptions are called the intentional notions. The philosopher Daniel Dennett coined the term intentional system to describe entities whose behaviour can be predicted by the method of attributing beliefs, desires and rational acumen. Dennett identifies different 'grades' of intentional system:

‘A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires. . . . A second-order intentional system is more sophisticated - it has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other intentional states) - both those of others and its own’.

McCarthy argued that there are occasions when the intentional stance is appropriate:
‘To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behaviour, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is incompletely known’.

The more that is known about a system, the less we need to rely on animistic, intentional explanations of its behaviour. But with very complex systems, a mechanistic or detailed explanation of behaviour may not be practicable. As computer systems become more complex, there is a need for more powerful abstractions and metaphors to explain their operation - low-level explanations become impractical. The intentional stance is such an abstraction. The intentional notions are thus abstraction tools, which provide developers with a convenient and familiar way of describing, explaining, and predicting the behaviour of complex systems. The most important developments in computing have been based on new abstractions:

– procedural abstraction,
– abstract data types,
– objects,

Agents, and agents as intentional systems, represent a further, and increasingly powerful, abstraction - and much of computer science is precisely the search for good abstraction mechanisms. Agent theorists therefore start from the (strong) view of agents as intentional systems: systems whose simplest consistent description requires the intentional stance.

Post-Declarative Systems
This view of agents leads to a kind of post-declarative programming:
– in procedural programming, the system is told exactly what to do,
– in declarative programming, the system is given general information about the relationships between objects, and a built-in control mechanism (e.g., goal-directed theorem proving) figures out what to do,
– with agents, a very abstract specification of the system is given, and the control mechanism has to figure out what to do, knowing that it will act in accordance with some built-in theory of agency (e.g., the well-known Cohen-Levesque model of intention).

Agent architecture

Symbolic/logical architecture
In order to make decisions, agents may use explicit logical reasoning to decide what to do in the current situation. Because such agents use a symbolic model to reason about the surrounding environment, this paradigm is known as symbolic AI.

A deliberative agent is a computer program that:

  • contains an explicitly represented, symbolic model of the world,
  • makes decisions (for example about what actions to perform) using symbolic reasoning,

This architecture assumes that the agent holds a representation of all important information as an internal symbolic model. Based on that model, the agent can use various AI techniques to reason about the data. Such agents are therefore very similar to knowledge-based systems (expert systems) and inherit the methodologies and limitations associated with such systems.
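As a toy illustration (all facts and rule names below are invented), a deliberative agent can be reduced to a set of symbolic facts plus decision rules evaluated against them:

# Explicit symbolic world model: the set of facts the agent believes.
world_model = {"raining", "meeting_at_9"}

# Decision rules over the symbolic model: premises -> action.
rules = [
    ({"raining"}, "take_umbrella"),
    ({"raining", "meeting_at_9"}, "leave_earlier"),
]

def deliberate(model, rules):
    # Symbolic reasoning: an action is selected when all of its
    # premises are present in the internal model of the world.
    return [action for premises, action in rules if premises <= model]

print(deliberate(world_model, rules))  # ['take_umbrella', 'leave_earlier']

Real deliberative agents replace this naive premise matching with full theorem proving or planning, which is exactly where the problems discussed next arise.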

Symbolic AI problems
The transduction problem
Because, in order to reason about the surrounding world, an agent needs information presented as symbols, the problem arises of how to translate the real world into a symbolic description that is accurate and adequate, in time for that description to be useful.

The representation/reasoning problem
Another problem is how to symbolically represent information about complex real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful. Because the real world is in most cases a very complicated and unpredictable environment, it is hard to create a symbolic model of it that is expressive enough while still permitting automated reasoning and planning in finite time.

Reactive architecture
Because there are many unsolved problems associated with symbolic AI, many researchers have shifted from the symbolic architecture to a reactive one. The previous architecture assumed that the agent holds an explicit representation of the surrounding world inside itself and makes decisions about the world using that model and various reasoning techniques. This idea presented the agent as a disembodied system separated from its environment. The agent was unable to understand the world directly - it was separated from it and created its own representation (symbolic model) of that world. One of the most vocal critics of mainstream AI (the symbolic architecture), Rodney Brooks, argued that:

  • Situatedness and embodiment: ‘Real’ intelligence is situated in the world, not in disembodied systems such as theorem provers or expert systems.
  • Intelligence and emergence: ‘Intelligent’ behaviour arises as a result of an agent’s interaction with its environment. Also, intelligence is not an innate, isolated property.

Therefore an agent should be composed of a hierarchy of task-accomplishing behaviours, an approach known as the subsumption architecture. Steels' Mars explorer system, built on the subsumption architecture, achieves near-optimal cooperative performance in a simulated 'rock gathering on Mars' domain: the objective is to explore a distant planet and, in particular, to collect samples of a precious rock. The location of the samples is not known in advance, but it is known that they tend to be clustered. The system is implemented as a set of situation-action rules (behaviours). Each behaviour 'competes' with others to exercise control over the agent. Lower layers represent more primitive kinds of behaviour and have precedence over layers further up the hierarchy. Reactive systems are, in terms of the amount of computation they do, extremely simple, yet some of these robots do tasks that would be impressive if they were accomplished by symbolic AI systems.
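A caricature of such a behaviour hierarchy in Python (the rules loosely paraphrase Steels' explorer and are simplified for illustration): each layer is a situation-action rule, and the first layer whose situation holds wins control of the agent.

def mars_explorer_action(percepts):
    layers = [
        (lambda p: "obstacle" in p, "change_direction"),                         # layer 0
        (lambda p: "carrying_samples" in p and "at_base" in p, "drop_samples"),  # layer 1
        (lambda p: "carrying_samples" in p, "travel_towards_base"),
        (lambda p: "samples_detected" in p, "pick_up_samples"),
        (lambda p: True, "move_randomly"),                                       # default
    ]
    # Lower layers have precedence over layers further up the hierarchy.
    for situation, action in layers:
        if situation(percepts):
            return action

print(mars_explorer_action({"samples_detected"}))  # pick_up_samples
print(mars_explorer_action({"carrying_samples"}))  # travel_towards_base
print(mars_explorer_action(set()))                 # move_randomly

No symbolic model of the world appears anywhere: the 'intelligence' of the explorer emerges entirely from the interaction of these simple rules with the environment.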

Hybrid architecture
Many researchers have argued that neither a completely deliberative nor completely reactive approach is suitable for building agents. They have suggested using hybrid systems, which attempt to marry classical and alternative approaches. An obvious approach is to build an agent out of two (or more) subsystems:

– deliberative - containing a symbolic world model, which develops plans and makes decisions in the way proposed by symbolic AI; and
– reactive - which is capable of reacting to events without complex reasoning. Often, the reactive component is given some kind of precedence over the deliberative one.

This kind of structuring leads naturally to the idea of a layered architecture. In such an architecture, an agent's control subsystems are arranged into a hierarchy, with higher layers dealing with information at increasing levels of abstraction.
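A minimal sketch of the two-subsystem idea (names invented; real layered architectures such as TouringMachines or InteRRaP are far richer): the reactive layer filters percepts first and takes precedence, while the deliberative layer contributes the next step of a symbolic plan.

def hybrid_agent_step(percepts, plan):
    # Reactive subsystem: immediate response, no complex reasoning.
    if "collision_imminent" in percepts:
        return "emergency_stop"
    # Deliberative subsystem: execute the next step of the plan.
    if plan:
        return plan.pop(0)
    return "idle"

plan = ["drive_to_station", "load_parcel", "deliver_parcel"]
print(hybrid_agent_step(set(), plan))                   # drive_to_station
print(hybrid_agent_step({"collision_imminent"}, plan))  # emergency_stop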

Planning agents
Agents are built with the aim of carrying out tasks for users. The task must be specified by the client, but as abstractly and simply as possible. The main idea is to tell agents what to do without telling them how to do it. The main burden of reasoning about how to accomplish the task is therefore shifted to the agent. To achieve that, the agent has to utilise AI planning techniques, defined as automatic programming: the design of a course of action that will achieve some desired goal. Within the symbolic AI community, it has long been assumed that some form of AI planning system will be a central component of any artificial agent. Building largely on the early work of Fikes & Nilsson, many planning algorithms have been proposed, and the theory of planning has been well developed.

Means-Ends reasoning
What is Means-Ends Reasoning?
The basic idea is to give an agent:
– a representation of the goal/intention to achieve,
– a representation of the actions it can perform,
– a representation of the environment,

and have it generate a plan to achieve the goal. Essentially, this is automatic programming.

Many questions arise as to how such information should be represented to the agent. Given this information, the agent generates the available options (the set of possible alternatives), chooses between them and commits to some of them. The chosen options then become intentions.
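Given these three representations, planning can be sketched as state-space search. The toy STRIPS-style planner below (a simplification in the spirit of Fikes & Nilsson's work; the action names are invented) describes actions by preconditions, an add list and a delete list, and searches breadth-first for a sequence of actions reaching the goal:

from collections import deque

# action name -> (preconditions, add list, delete list)
actions = {
    "pick_up":  ({"at_table", "hand_empty"}, {"holding_cup"}, {"hand_empty"}),
    "go_table": ({"at_door"}, {"at_table"}, {"at_door"}),
}

def plan(initial_state, goal):
    frontier = deque([(frozenset(initial_state), [])])
    seen = {frozenset(initial_state)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps                       # goal reached: return the plan
        for name, (pre, add, delete) in actions.items():
            if pre <= state:                   # action applicable in this state
                successor = frozenset((state - delete) | add)
                if successor not in seen:
                    seen.add(successor)
                    frontier.append((successor, steps + [name]))
    return None                                # no plan achieves the goal

print(plan({"at_door", "hand_empty"}, {"holding_cup"}))  # ['go_table', 'pick_up']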

The following commitment strategies are commonly discussed in the literature of rational agents:
– Blind commitment: a blindly committed agent will continue to maintain an intention until it believes the intention has actually been achieved. Blind commitment is also sometimes referred to as fanatical commitment.
– Single-minded commitment: a single-minded agent will continue to maintain an intention until it believes that either the intention has been achieved, or else that it is no longer possible to achieve the intention.
– Open-minded commitment: an open-minded agent will maintain an intention as long as it is still believed possible.

An agent has commitment both to ends (i.e., the state of affairs it wishes to bring about) and to means (i.e., the mechanism via which the agent wishes to achieve that state of affairs).
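The single-minded strategy, for instance, reduces to a loop that keeps acting until the intention is believed achieved or believed impossible; a minimal Python sketch (the helper predicates are invented placeholders for the agent's actual beliefs):

def single_minded_pursue(intention, achieved, achievable, execute_step):
    # Maintain the intention until achieved or no longer possible.
    while not achieved(intention):
        if not achievable(intention):
            return "dropped"        # belief: the intention is impossible
        execute_step(intention)     # keep acting towards the intention
    return "achieved"

progress = {"steps": 0}
result = single_minded_pursue(
    "book_tickets",
    achieved=lambda i: progress["steps"] >= 3,
    achievable=lambda i: True,
    execute_step=lambda i: progress.update(steps=progress["steps"] + 1),
)
print(result)  # achieved

A blindly committed agent would simply omit the achievable check, while an open-minded one would additionally re-examine the intention itself against its current beliefs on every pass.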

Multiagent Systems
A multi-agent system (MAS) is one that consists of a number of agents that interact with each other. To interact successfully, agents need to cooperate, coordinate and negotiate. Agents in a multi-agent system[3] are characterised by abstraction, interoperability, modularity and dynamism. These qualities are particularly useful in that they can help to promote open systems, which are typically dynamic, unpredictable, and highly heterogeneous, as is the Internet. Creating intelligent agents in distributed and open environments like the Web may overcome many of the restrictions and problems that characterise standard methodologies. The described technology might be used for the automatic creation of virtual communities of heterogeneous agents that dynamically collaborate, compete, form teams or coalitions, enter into auctions and negotiate prices or services.
Although the main concept and the possibilities offered by multiagent architectures may be tempting and interesting, it is not enough to implement a few agents and wait for them to spring into action and create a community. Such systems provide new solutions for complex problems, but they also need more sophisticated infrastructure services to support them.

Modularity and abstraction
For simple problems it may be enough to provide a single-agent architecture. But this solution may be optimal only in cases where the agent operates in a predictable and reasonably small environment and is equipped with all the needed reasoning techniques. In situations where the agent has to solve complex, time-consuming, distributed problems and operate in a highly dynamic environment, where resources and other agents may be activated and deactivated at any unpredictable time, a more sophisticated solution is needed. Nowadays AI has matured, and it endeavours to attack more complex, realistic, and large-scale problems. Such problems are beyond the capabilities of an individual agent. The capacity of an intelligent agent is limited by its knowledge, its computing resources, and its perspective. This bounded rationality is one of the underlying reasons for creating problem-solving organizations. The most powerful tools for handling complexity are modularity and abstraction. Multiagent systems (MASs) offer modularity. If a problem domain is particularly complex, large, or unpredictable, then the only way it can reasonably be addressed is to develop a number of functionally specific and (nearly) modular components (agents) that are specialised at solving a particular aspect of the general problem. This decomposition allows each agent to use the most appropriate paradigm for solving its particular problem. When interdependent problems arise, the agents in the system must coordinate with one another to ensure that the interdependencies are properly managed. Furthermore, real problems involve distributed, open systems (Hewitt 1986).

Open systems
An open system is one in which the structure of the system itself is dynamically changing. The characteristics of such a system are that its components are not known in advance, can change over time, and may consist of highly heterogeneous agents implemented by different people, at different times, with different software tools and techniques. The best-known example of a highly open software environment is the internet. In the area of multiagent systems the same description applies, since agents may change their location (clone themselves or move between agent systems - if they are compliant with the standards of FIPA, the Foundation for Intelligent Physical Agents) and new agents may be added to the existing infrastructure. The internet can also be viewed as a large, distributed information resource, with nodes on the network designed and implemented by different organizations and individuals. In an open environment, information sources, communication links, and agents can appear and disappear unexpectedly. Within a multi-agent system, agents represent their view of the world by explicitly defined ontologies. The interoperability of such a multi-agent system is achieved through the reconciliation of these views by a commitment to common ontologies that permit agents to interoperate and cooperate.

Types of MAS agents
A MAS may be described as an intentional system that receives requests and returns results to the client. Looking inside, this architecture consists of autonomous components that interact with each other in order to achieve personal and common goals. Depending on the type of problem, each agent may represent part of the solution, and therefore the only way to solve the whole problem is to communicate and cooperate with other agents. From this point of view a multiagent system may be described as a society of agents. In the Evening Planner architecture (see chapter x.x), when a collection of personal agents gathers to schedule a meeting between their users, they pursue a common goal and intelligent group behaviour emerges (see Kautz, Selman, and Coen 1994 for a similar situation). When scheduling is complete, the agents disperse, perhaps never to gather again in this same grouping. Even considering only this situation, it is easy to see that through interaction with each other (without any centralized part of the system) agents are capable of achieving their own and the community's goals, and intelligent behaviour arises. Although there is no central point in this architecture (agents might be distributed among different physical locations, and each may be as relevant as any other, since they depend on each other), there should be a distinct division between components that perform specific roles in this community. Therefore, at the conceptual level, a MAS may be divided into three general agent categories:

  • service providers,
  • service requesters,
  • middle agents,

Service providers provide some type of service, such as finding information, or performing some particular domain specific problem solving. Requester agents need provider agents to perform some service for them. Agents that help locate others are called middle agents. Matchmaking is the process of finding an appropriate provider for a requester through a middle agent, and has the following general form:

– provider agents advertise their capabilities to middle agents,
– middle agents store these advertisements,
– a requester asks some middle agent whether it knows of providers with desired capabilities,
– the middle agent matches the request against the stored advertisements and returns the result, a subset of the stored advertisements,

While this process at first glance seems very simple, it is complicated by the fact that not only local information sources, but also providers and requesters on the Web, are usually heterogeneous and incapable of understanding each other. Inside an agent community hosted by one system (or based on the same architecture) agents may easily communicate using specific agent languages and ontologies, and capability advertisement and matching are straightforward. Between different agent systems, however, ontologies are developed and maintained independently of each other. Thus two agent systems may use different ontologies to represent their views of the domain. This is often referred to as an ontology mismatch. In such a situation, interoperability between agents is based on the reconciliation of their heterogeneous views, as carried out in industry-driven coalitions like the W3C.
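Ignoring the heterogeneity problem, the four-step matchmaking protocol itself can be sketched in a few lines of Python (a deliberately naive middle agent that matches on exact capability names; real matchmakers such as LARKS match on structured capability descriptions):

class Matchmaker:
    def __init__(self):
        self.advertisements = {}  # capability -> list of provider names

    def advertise(self, provider, capability):
        # Providers advertise their capabilities; the middle agent stores them.
        self.advertisements.setdefault(capability, []).append(provider)

    def match(self, capability):
        # Return the subset of stored advertisements matching the request.
        return self.advertisements.get(capability, [])

mm = Matchmaker()
mm.advertise("agent_cinema", "find_movie_tickets")
mm.advertise("agent_rail", "find_train_connection")
print(mm.match("find_movie_tickets"))  # ['agent_cinema']
print(mm.match("book_hotel"))          # [] - no known provider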

From the functional point of view, a MAS may be considered as a set of three different groups of agents, performing specific actions:

  • interface agents – communicate with the client and represent the client's interests in the MAS,
  • task agents – used by interface agents to solve complicated problems (service-oriented agents),
  • information agents – provide needed data and information to other agents.

Benevolent and self-interested agents
If the MAS in which the agents exist was created in order to solve a specific problem, or the agent designer has full control over the system, it is possible to design the agents to help each other whenever asked. In this case agents are benevolent: the client's best interest is their best interest. Problem-solving in benevolent systems is cooperative distributed problem solving (CDPS). Benevolence simplifies the system design task enormously.
If agents represent individuals or organisations (the more general case), then it is not proper to make the benevolence assumption. Agents will be assumed to act on behalf of their own interests, possibly at the expense of others. The resulting potential for conflict may complicate the design task enormously.
In many cases a MAS may be composed of both types of agents, where self-interested agents communicate and cooperate with benevolent agents but negotiate with other self-interested agents.
This type of mixed architecture is implemented in the Evening Planner multiagent system, which consists of both client-specific agents, which represent clients in the system and are therefore self-interested (always acting on behalf of their clients), and task agents, which are characterized by benevolence and the ability to cooperate with client agents to solve their problems. When there is a need to set up a common meeting time (see x.x), the self-interested agents negotiate between themselves (depending on their clients' schedules) and hopefully come to a mutually acceptable agreement thanks to negotiation techniques.

Cooperative distributed problem solving
When an agent inside a MAS realises that it cannot achieve a goal in isolation, or prefers not to achieve it alone (due to solution quality or a deadline), it may use the following methods to solve the problem cooperatively:

– Matchmaking: the agent contacts middle agents and requests from them an agent capable of solving the problem. In the Evening Planner scenario such an interaction occurs when an agent navigating semantic content finds information it does not understand (described in an unknown ontology) and, to solve that problem, asks a matchmaker to find an agent or Semantic Web Service capable of solving it (translating to a known ontology). Another example is when an agent delegates the responsibility of planning an evening to an agent specialising in this area (which in turn accesses other agents and Semantic Web Services).

– Contract Net: the agent broadcasts (announces) the problem to all other agents (interested in cooperation). Agents that receive the announcement decide for themselves whether they wish to bid for the task (mostly based on their capability to expedite the task and on quality constraints). Next, the agent that sent the task announcement must choose between the bids and decide who to award the contract to. The result of this process is communicated to the agents that submitted a bid. The successful contractor then expedites the task. The Evening Planner utilises this technique while agreeing upon a meeting schedule between client agents: the agent representing a client who wants to invite friends sends a meeting-time proposal to the other agents (the personal agents of the other users) and schedules the meeting based on their responses.
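The announce-bid-award cycle of the Contract Net can be sketched as follows (a minimal single-round Python sketch with invented agent structures; the full protocol also covers result reporting and multi-level contracting):

def contract_net(task, agents):
    # 1. The task announcement reaches all agents; 2. each agent decides
    # for itself whether to bid; 3. the manager awards the contract to
    # the best (here: cheapest) bid; 4. the contractor expedites the task.
    bids = [(agent["bid"](task), agent)
            for agent in agents if agent["bid"](task) is not None]
    if not bids:
        return None
    cost, contractor = min(bids, key=lambda bid: bid[0])
    return contractor["name"], contractor["perform"](task)

agents = [
    {"name": "a1", "bid": lambda t: 10,   "perform": lambda t: t + ": done"},
    {"name": "a2", "bid": lambda t: 7,    "perform": lambda t: t + ": done"},
    {"name": "a3", "bid": lambda t: None, "perform": lambda t: None},  # declines
]
print(contract_net("plan_evening", agents))  # ('a2', 'plan_evening: done')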

Negotiation
Inside a multiagent system where not all agents are benevolent, or where agents represent the interests of clients who have different objectives and goals, there may be a need for negotiation between such agents in order to reach a common goal or an acceptable agreement. By definition (Oxford English Dictionary), negotiation may be described as:

A process by which a group of agents communicate with one another to try to come to a mutually acceptable agreement on some matter.

Agents negotiate in order to influence an acquaintance. Negotiation may be achieved by:

• making proposals,
• trading options,
• offering concessions,
• (hopefully) coming to an agreement,

In order to negotiate, agents should have an explicit representation of the negotiation issues (the negotiation object) over which agreement is required. Very often it is composed of several values, such as:

– price
– quality
– volume
– delivery date

During negotiation, the agent's main strategy is to achieve its objectives - typically, the agent aims to maximise its benefit. There are various types of negotiation protocols and ways to reach a mutually acceptable agreement. During the process, the negotiation object may be modified, or a counter-proposal may be made: an alternative generated in response to a proposal, by which the proposal is re-constituted. In the Evening Planner system, the agents that represent clients and have access to their clients' schedules negotiate to find an acceptable meeting time.
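For a single issue such as price, one simple protocol is alternating offers with monotonic concessions. The Python sketch below (invented parameter names and a deliberately crude concession strategy) has both sides concede by a fixed step towards their private reservation values until the offers cross or the zone of agreement is exhausted:

def negotiate(buyer_limit, seller_limit, buyer_offer, seller_offer, step):
    while True:
        if seller_offer <= buyer_offer:       # offers crossed: agreement
            return (buyer_offer + seller_offer) / 2.0
        # Concession and counter-proposal, bounded by reservation values.
        buyer_offer = min(buyer_offer + step, buyer_limit)
        seller_offer = max(seller_offer - step, seller_limit)
        if buyer_offer >= buyer_limit and seller_offer <= seller_limit \
                and seller_offer > buyer_offer:
            return None                       # no zone of agreement: failure

print(negotiate(buyer_limit=120, seller_limit=80,
                buyer_offer=60, seller_offer=150, step=10))  # 105.0 - deal
print(negotiate(buyer_limit=70, seller_limit=100,
                buyer_offer=60, seller_offer=150, step=10))  # None - no deal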

Speech acts
Because agents are autonomous and exist in open systems, communication between them has to be much more sophisticated than between simple objects. Agents (unlike objects), when requested to perform an operation (by other agents), may not understand the request or may even refuse to execute the task. Agent communication is therefore much more human-like and borrows its inspiration from speech act theory. Speech act theories are pragmatic theories of language, i.e., theories of language use: they attempt to account for how language is used by people every day to achieve their goals and intentions. The origins of speech act theory are usually traced to Austin's 1962 book, How to Do Things with Words. More generally, everything people utter is uttered with the intention of satisfying some goal or intention. Searle (1969) identified various different types of speech act:

– representatives: such as informing, e.g., ‘It is raining’
– directives: attempts to get the hearer to do something e.g., ‘please make the tea’
– commissives: which commit the speaker to doing something, e.g., 'I promise to...'
– expressives: whereby a speaker expresses a mental state, e.g., ‘thank you!’
– declarations: such as declaring war or christening.

In general, a speech act can be seen to have two components:
– a performative verb: (e.g., request, inform, . . . )
– propositional content: (e.g., “the door is closed”)

For example:
– performative = request, content = "the door is closed", speech act = "please close the door"
– performative = inform, content = "the door is closed", speech act = "the door is closed!"
– performative = inquire, content = "the door is closed", speech act = "is the door closed?"

In order to be able to communicate, agents must have agreed on a common set of terms. A formal specification of such a set of terms is known as an ontology. The knowledge-sharing effort has associated with it a large effort at defining common ontologies. More recently, FIPA started work on a program of agent standards, the centrepiece of which is an ACL (Agent Communication Language).

The basic message structure is composed of:
– a performative (there are 20 performatives in FIPA),
– housekeeping information, e.g. the sender (meta- or context information),
– the content - the actual content of the message.

Example:

(inform
:sender agent1
:receiver agent5
:content (price good200 150)
:language sl
:ontology hpl-auction
)

Simple ACL message content

Inform and Request are the two basic performatives in FIPA. All others are macro definitions, defined in terms of these. The meaning of inform and request is defined in two parts:
– pre-condition - what must be true in order for the speech act to succeed,
– rational effect - what the sender of the message hopes to bring about,
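A rough Python rendering of this two-part semantics for inform (a strong simplification of the FIPA specification; the belief sets here stand in for a real belief base):

def inform(sender_beliefs, receiver_beliefs, proposition):
    # Pre-condition: the sender itself believes the proposition
    # (the full semantics also requires beliefs about the receiver).
    if proposition not in sender_beliefs:
        return "failure: sender does not believe the content"
    # Rational effect: the receiver comes to believe the proposition.
    # This is only what the sender hopes for - the receiver is
    # autonomous and a real agent may decline to update its beliefs.
    receiver_beliefs.add(proposition)
    return "informed"

agent1_beliefs = {"price(good200, 150)"}
agent5_beliefs = set()
print(inform(agent1_beliefs, agent5_beliefs, "price(good200, 150)"))  # informed
print("price(good200, 150)" in agent5_beliefs)                        # True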

Summary
This chapter was not meant to present agents as a technology superior to already existing programming techniques. According to Russell and Norvig, the notion of an agent is meant to be a tool for analysing systems, not an absolute characterization that divides the world into agents and non-agents. Dividing all computing disciplines into agent-based and non-agent-based systems therefore does not make sense. The only concepts that yield sharp-edged categories are mathematical concepts, and they succeed only because they are content-free. Agents live in the real world (or some world), and real-world concepts yield fuzzy categories.

The Semantic Web, Scientific American, http://www.scientificamerican.com/article.cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21&pageNumber=7&catID=2
Multiagent Systems, Nick Jennings and Michael Wooldridge, http://www.csc.liv.ac.uk/~mjw/pubs/imas/
ERCIM News, European Research Consortium for Informatics and Mathematics, www.ercim.org
Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents, Stan Franklin and Art Graesser, Institute for Intelligent Systems, University of Memphis
LARKS: Dynamic Matchmaking Among Heterogeneous Software Agents in Cyberspace

13 comments

Comrade Amato, I sense a certain discrimination towards, ahem, as you put it, the 'Chinese'... and here I thought Poland was a more tolerant country. The comforting thing is that your lack of tolerance stems from very little knowledge (unfortunately, someone has to say it), because as you wrote: 'good thing the Chinese didn't invent the computer... or this "genius" would be drawing squiggles at us here...'. Well, the 'Chinese' didn't invent the computer, but as far as I remember it is based on the architecture of a certain Mr von Neumann, who unfortunately is not English, so your line of reasoning (or, more likely, its complete absence) has nothing to do with my writing in English because the English supposedly invented the computer. Sometimes it's worth thinking about what you write, 'genius'.

Good thing the Chinese didn't invent the computer... or this 'genius' would be drawing squiggles at us here...

Don't exaggerate, it's not that bad. If someone really wants to, they'll put in 200% of the time and read it, maybe even translate it.

Theoretically I couldn't care less what's written there, because my English is weak and I suppose it's not worth pulling out a dictionary. I often read about things I have no clue about, since nobody knows everything, but this one I'll pass on.

And here I thought I lived in Poland ;) And that excuse in the introduction is lame, it would have been better to just say you didn't feel like translating.

Good that it's in English and not in German!
But is it even worth translating?

You can always translate it with some English Translator

Translating this into Polish is a toooon of work - if you want to read about it, you have a pretext to brush up on your English. Go for it.

It would be good to post it in Polish, because although I know English (quite well :P), I still don't know some of the words, so quite often a sentence lost its meaning for me :(

I'd prefer it in our language, but it is what it is ;d

And it's very good that it's in English

Hmm, so we have articles in English on 4p. now? - I see we're joining the EU ;)

So you couldn't have translated it?