



Agent design

 

We implement our method as a multi-agent system based on the Cooperative Distributed Vision (CDV) framework [1].

The functions of this system fall into three categories: detecting the dynamic situation by extracting situation features, imaging objects with camera-works chosen according to the dynamic situation, and mediating the requests of multiple users. We design one type of agent for each function, as sketched below.
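To make this division of labour concrete, the following sketch defines one minimal interface per agent category. The class and method names are hypothetical illustrations; the paper does not prescribe a particular API.

```python
from abc import ABC, abstractmethod


class ObservationAgent(ABC):
    """Extracts one situation feature from its sensor (camera or other device)."""

    @abstractmethod
    def extract_feature(self):
        """Return the current value of the situation feature this agent observes."""


class ImagingAgent(ABC):
    """Controls one active camera to realize a camera-work and generate video."""

    @abstractmethod
    def execute_camera_work(self, camera_work):
        """Drive the active camera according to the assigned camera-work."""


class MediationAgent(ABC):
    """Interprets the dynamic situation and mediates the requests of multiple users."""

    @abstractmethod
    def select_camera_works(self, situation_features, user_requests):
        """Choose camera-works from the situation features and the users' requests."""
```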

An agent that extracts a situation feature is called an observation agent. Observation agents differ from one another because each of them extracts a different situation feature. The number of observation agents is therefore determined by the number of situation features needed to detect the dynamic situation.

An agent that controls an active camera to image an object is called an imaging agent. Its purpose is to realize a camera-work and generate video of the object. The number of imaging agents equals the number of camera-works selected at that time.
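The cardinality rules stated above, one observation agent per situation feature and one imaging agent per currently selected camera-work, could be expressed by a small set-up routine such as the one below. The function name and factory arguments are assumptions made for illustration only.

```python
def build_agents(situation_features, selected_camera_works,
                 make_observation_agent, make_imaging_agent):
    """Instantiate one observation agent per situation feature and one
    imaging agent per camera-work selected at the current moment."""
    observation_agents = [make_observation_agent(f) for f in situation_features]
    imaging_agents = [make_imaging_agent(w) for w in selected_camera_works]
    return observation_agents, imaging_agents
```

For example, with three situation features and two selected camera-works, this yields three observation agents and two imaging agents.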

The last kind of agent is designed mainly to mediate the requests of multiple users; we call it a mediation agent. While imaging agents and observation agents are device-dependent (tied to a camera or a sensor), the mediation agent is device-independent. Currently, we build one mediation agent, which interprets the dynamic situation from the situation features and selects camera-works according to the mediation procedure.
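A control loop for the single mediation agent might be sketched as follows. The callables `interpret_situation` and `mediation_procedure` stand in for the paper's situation interpretation and mediation procedure; they are placeholders, not the authors' actual implementation.

```python
class SimpleMediationAgent:
    """Device-independent agent: interprets the dynamic situation from the
    situation features and selects camera-works via a mediation procedure."""

    def __init__(self, interpret_situation, mediation_procedure):
        # Both callables are placeholders for the procedures described in the paper.
        self.interpret_situation = interpret_situation
        self.mediation_procedure = mediation_procedure

    def step(self, observation_agents, user_requests):
        # 1. Gather situation features from the device-dependent observation agents.
        features = [agent.extract_feature() for agent in observation_agents]
        # 2. Interpret the dynamic situation from those features.
        situation = self.interpret_situation(features)
        # 3. Mediate the users' requests and select the camera-works to realize.
        return self.mediation_procedure(situation, user_requests)
```

The selected camera-works returned by `step` would then be assigned to imaging agents, one agent per camera-work, as described above.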



Yoshinari Kameda
Fri Oct 1 16:26:35 JST 1999