In order to visually develop AMACONT applications, a modular authoring tool is being developed. It utilizes a flexible object model based on JDOM to provide programmatic access to AMACONT documents. Furthermore, it makes it possible to associate different types of custom editor plugins with arbitrary AMACONT components. Thus, different graphical editor modules supporting selected steps of the overall authoring process can be developed and “plugged into” the authoring framework. Recently, basic editors for creating adaptable media components (text, image, video, audio, CSS) have been created. As an example of such media editors, we mention the image editor, which allows uploading images in different formats (e.g. jpeg, gif, bmp, png), editing their properties in a visual way, and saving them as image components.
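The association between component types and editor plugins can be pictured as a simple registry. The following is an illustrative sketch only — the actual plugin interface of the AMACONT authoring tool is not specified here, and all names are assumptions:

```javascript
// Illustrative sketch: the real AMACONT plugin API is not described in
// the text, so the class and method names below are assumptions.

// A registry that associates editor plugins with component types, so the
// framework can look up a suitable editor for any component it loads.
class EditorRegistry {
  constructor() {
    this.plugins = new Map();
  }
  // Register an editor plugin for a given component type.
  register(componentType, plugin) {
    this.plugins.set(componentType, plugin);
  }
  // Find the editor responsible for a component, or null if none fits.
  editorFor(componentType) {
    return this.plugins.get(componentType) || null;
  }
}

// Example: plug in an image editor for image components.
const registry = new EditorRegistry();
registry.register('image', {
  name: 'ImageEditor',
  formats: ['jpeg', 'gif', 'bmp', 'png'],
});

registry.editorFor('image').name; // 'ImageEditor'
```

A lookup-based design of this kind would let new media editors be added without changing the framework core, which matches the "plugged in" extensibility the tool aims for.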
They allow capturing user interactions on the client side and sending them back to the server, where they are stored in history lists (session profile). All media components (e.g. video, audio, image, text) of the Web page can be observed in order to acquire interactions with them (e.g. video started/paused/stopped, image minimized/maximized, image printed, text enlarged/collapsed, text scrolled). Figure 4 shows a possible sequence of automatically generated XHTML documents of an online video store. In the left picture the user enters the default version of the page, containing no detailed information. As mentioned in Section 2.2.2, the user’s preferences are represented in the form of a decision list. At the beginning of a user session, the following “trivial” rule is created: Note that an XML grammar has been developed for representing such rules. Still, for the sake of readability, we use the above simplified formalization in this paper. When the user is interested in getting enhanced information about videos, she maximizes the title picture or enlarges a more detailed text description (see Figure 4, middle). This interaction is captured by client-side scripts and sent to the server in the following form: Based on this interaction, the CDL4 algorithm is triggered, which adds the corresponding rules to the user model: When the user comes back to this Web page or to any other page containing the video list, an adapted presentation according to the updated rules is generated (see Figure 4, right). In order to establish the connection between low-level interactions (e.g. enlarging pictures) and the rule semantics (interest in action films), component authors have to provide the observable media components with specific metadata. In the above example, semantic metadata in the form of attribute-value pairs (e.g. category = ”action”) is attached to the affected image component and evaluated by the CDL4 module.
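The flow described above — a reported interaction carrying author-supplied metadata, matched against an ordered decision list — can be sketched as follows. This is a hypothetical illustration: the actual AMACONT message format and the CDL4 rule grammar are not reproduced here, so all field names are assumptions.

```javascript
// Hypothetical sketch; the actual AMACONT message format and CDL4 rule
// grammar are not shown in the text, so all field names are assumptions.

// An interaction event as the client script might report it: which
// component was touched, what the user did, and the semantic metadata
// the author attached to the component (attribute-value pairs).
function buildInteractionEvent(componentId, mediaType, action, metadata) {
  return { componentId, mediaType, action, metadata };
}

// A decision list is an ordered list of rules; the first rule whose
// condition matches the component metadata yields its conclusion.
// The final rule has an empty condition and acts as the "trivial" default.
function evaluateDecisionList(rules, metadata) {
  for (const rule of rules) {
    const matches = Object.entries(rule.condition || {})
      .every(([attr, value]) => metadata[attr] === value);
    if (matches) return rule.conclusion;
  }
  return null;
}

// Example: the user maximized the title picture of an action film.
const event = buildInteractionEvent(
  'img-title-42', 'image', 'maximized', { category: 'action' });

// Rules learned so far: interest in action films, default otherwise.
const rules = [
  { condition: { category: 'action' }, conclusion: 'detailed' },
  { condition: {},                     conclusion: 'default' },
];

evaluateDecisionList(rules, event.metadata); // 'detailed'
```

First-match-wins evaluation is what makes the rule order significant: newly learned, more specific rules are consulted before the default, so returning visitors with a recorded interest get the detailed presentation.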
To sum up, this mechanism enables an effective and uniform handling of the device capabilities of different device classes. Furthermore, it makes it possible to acquire permanently changing device properties (e.g. the size of the browser window) by collecting that information via scripts directly on the client device. The result is an always up-to-date device profile in the device/user model on which document generation is based. As mentioned in Section 2.2.2, adapting content to dynamic user preferences can be effectively used for optimizing Web pages on mobile devices. Other approaches only clip or restructure Web pages to make them suitable for limited mobile devices. However, capturing such preferences is a serious problem if we do not want to ask the user explicitly for information about his/her preferences. Our system allows observing users’ browsing behavior by tracking the interactions performed on media components. Similar to the acquisition of device capabilities, this is done by means of specific code fragments (JavaScript or JScript) which are embedded and configured for each media component during document generation.