
Imaging rule

An imaging rule is a set of functions that map a dynamic situation to camera-works.

Hence, an imaging rule consists of two-tuples, each pairing an A-component with camera-works. An A-component may include several objects, whereas a camera-work is designed for only one object; therefore, each A-component generally has a set of camera-works.
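As a minimal data-structure sketch (the names CameraWork, ImagingRuleEntry, and the shot field are illustrative assumptions, not part of the original system), such a rule can be held as a mapping from each A-component to its camera-works:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass(frozen=True)
class CameraWork:
    """One camera-work, designed for exactly one object."""
    target_object: str   # the single object this camera-work shoots
    shot: str            # e.g. "close-up", "wide", "pan" (illustrative values)


@dataclass
class ImagingRuleEntry:
    """One two-tuple of the rule: an A-component and its camera-works."""
    a_component: str                                   # e.g. "No.1"
    camera_works: List[CameraWork] = field(default_factory=list)
    # An empty list corresponds to an empty row of Table 2:
    # the user does not want to image this A-component.


# An imaging rule maps each A-component to its entry.
ImagingRule = Dict[str, ImagingRuleEntry]
```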

The dynamic situation in the real space varies over time. It is observed by the sensors, from which situation features are extracted; by matching these features against the other representation of a dynamic situation, i.e. the situation feature representation, the current dynamic situation is detected.
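A minimal sketch of this detection step, assuming each dynamic situation is represented by a set of situation features and that detection amounts to checking whether those features are among the ones currently extracted from the sensors (the matching procedure is not specified here; detect_situations is a hypothetical name):

```python
from typing import Dict, List, Set


def detect_situations(extracted: Set[str],
                      representations: Dict[str, Set[str]]) -> List[str]:
    """Return the A-components whose situation feature representation
    is contained in the features extracted from the sensors."""
    detected = []
    for a_component, required in representations.items():
        if required <= extracted:      # subset test on the feature sets
            detected.append(a_component)
    return detected
```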

Let us explain with an example. Suppose a user defines an imaging rule such as that in Table 2 for the A-components shown in Table 1. An empty row in Table 2 means that the user does not want to image the corresponding object (for example, the lecturer of A-component No.3) even if that A-component is detected.

If A-components No.1 and No.4 are detected in the real space at time t, the imaging rule consists of the three camera-works listed for them in Table 2 and represents the request of the user at that time.
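Continuing the hypothetical sketch above, selecting the camera-works at time t amounts to collecting the non-empty rows of Table 2 for the detected A-components (camera_works_at is an illustrative name, not from the original system):

```python
from typing import Dict, Iterable, List


def camera_works_at(rule: Dict[str, "ImagingRuleEntry"],
                    detected: Iterable[str]) -> List["CameraWork"]:
    """Collect the camera-works that the imaging rule requests for the
    A-components detected at the current time."""
    requested: List["CameraWork"] = []
    for a_component in detected:
        entry = rule.get(a_component)
        if entry is not None:          # empty rows contribute nothing
            requested.extend(entry.camera_works)
    return requested


# For the example in the text, A-components No.1 and No.4 are detected at time t,
# so camera_works_at(rule, ["No.1", "No.4"]) returns the camera-works listed
# for those rows of Table 2.
```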

Whereas a single imaging rule is defined by one director in a conventional remote lecturing system or in ordinary multimedia video generation, our approach allows every user to define an imaging rule, so that each user can express his or her favorite way of imaging.
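Under the same illustrative types as above, this simply means the system keeps one imaging rule per user instead of a single director's rule (the user identifiers below are hypothetical):

```python
from typing import Dict

# One imaging rule per user rather than a single rule fixed by one director.
user_rules: Dict[str, "ImagingRule"] = {
    "user_a": {},   # user A's imaging rule (entries as in the sketch above)
    "user_b": {},   # user B's imaging rule
}
```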

  
Table 2: An example of an imaging rule


