These are the technical reports of our institute up to the year 2015.
More recent reports are published on Boris or arXiv.
Zan Li, Torsten Braun
Abstract
Download
Passive positioning systems produce user location information for third-party providers of positioning services. Since the tracked wireless devices do not participate in the positioning process, passive positioning can only rely on simple, measurable radio signal parameters, such as timing or power information. In this work, we provide a passive tracking system for WiFi signals with an enhanced particle filter using fine-grained power-based ranging. Our proposed particle filter provides an improved likelihood function on observation parameters and is equipped with a modified coordinated turn model to address the challenges in a passive positioning system. The anchor nodes for WiFi signal sniffing and target positioning use software defined radio techniques to extract channel state information and mitigate multipath effects. By combining the enhanced particle filter with a set of enhanced ranging methods, our system can track mobile targets with a median accuracy of 1.5 m and a 90th-percentile accuracy of 2.3 m in a complex indoor environment. Our proposed particle filter significantly outperforms the typical bootstrap particle filter, the extended Kalman filter and trilateration algorithms.
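The predict-weight-resample cycle that such a filter builds on can be sketched as follows. This is a plain bootstrap particle filter with a random-walk motion model and a Gaussian range likelihood, not the report's improved likelihood function or coordinated turn model; all parameter values are illustrative.

```python
import numpy as np

def bootstrap_pf_step(particles, weights, ranges, anchors,
                      sigma_r=1.0, sigma_q=0.5, rng=None):
    """One predict-weight-resample cycle of a bootstrap particle filter.

    particles: (N, 2) candidate target positions
    ranges:    (M,)   measured distances to each anchor
    anchors:   (M, 2) known anchor positions
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Predict: random-walk motion model (stand-in for a coordinated turn model).
    particles = particles + rng.normal(0.0, sigma_q, particles.shape)
    # Weight: Gaussian likelihood of the observed ranges given each particle.
    d = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
    logw = -0.5 * np.sum(((d - ranges) / sigma_r) ** 2, axis=1)
    weights = np.exp(logw - logw.max())
    weights /= weights.sum()
    # Resample (multinomial) to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

The position estimate at each step is the weighted (here, post-resampling uniform) mean of the particle cloud.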
Zan Li, Danilo Burbano Acuña, Luis Carrera, Torsten Braun
Abstract
Download
Indoor positioning has become an emerging research area because of huge commercial demands for location-based services in indoor environments. Channel State Information (CSI), a type of fine-grained physical-layer information, has recently been proposed to achieve high positioning accuracy using range-based methods, e.g., trilateration. In this work, we propose to fuse CSI-based ranges with velocity estimates from inertial sensors in an enhanced particle filter to achieve highly accurate tracking. The algorithm relies on a set of enhanced ranging methods and further mitigates the remaining ranging errors by a weighting technique. Additionally, we provide an efficient method to estimate velocity from inertial sensors. The algorithms are designed for a network-based system, which uses rather cheap commercial devices as anchor nodes. We evaluate our system in a complex environment along three different moving paths. Our proposed tracking method achieves a mean accuracy of 1.3 m and a 90th-percentile accuracy of 2.2 m, which is more accurate and stable than pedestrian dead reckoning and range-based positioning.
E. Bourtsoulatze, N. Thomos, J. Saltarin and T. Braun
Abstract
Download
In this work, we propose a novel network coding enabled NDN architecture for the delivery of scalable video. Our scheme utilizes network coding in order to address the problem that arises in the original NDN protocol, where optimal use of the bandwidth and caching resources necessitates the coordination of forwarding decisions. To optimize the performance of the proposed network coding based NDN protocol and render it appropriate for the transmission of scalable video, we devise a novel rate allocation algorithm that decides on the optimal rates of Interest messages sent by clients and intermediate nodes. This algorithm guarantees that the achieved flow of Data objects will maximize the average quality of the video delivered to the client population. To support the handling of Interest messages and Data objects when intermediate nodes perform network coding, we modify the standard NDN protocol and introduce the use of Bloom filters, which efficiently store additional information about the Interest messages and Data objects. The proposed architecture is evaluated for the transmission of scalable video over PlanetLab topologies. The evaluation shows that the proposed scheme performs very close to the optimal performance.
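A Bloom filter answers set-membership queries compactly: inserted elements are always reported present, while elements never inserted are reported present only with a small false-positive probability. A minimal sketch follows; the bit-array size, hash construction and item names are illustrative, not those of the proposed protocol.

```python
import hashlib

class BloomFilter:
    """Compact probabilistic set membership: no false negatives,
    a small, tunable rate of false positives."""

    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

An intermediate node could, for example, insert the names of pending Interests and later test incoming Data object names against the filter.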
H. Mercier, T. Braun, P. Felber, P. Kropf, P. Kuonen, E. Riviere (eds.)
Abstract
Download
The Doctoral Workshop on Distributed Systems was held at Le Louverain, Neuchâtel, Switzerland, from June 21-23, 2015. Ph.D. students from the Universities of Neuchâtel and Bern as well as the University of Applied Sciences of Fribourg presented their current research work and discussed recent research results. This technical report includes the extended abstracts of the talks given during the workshop.
C. Anastasiades, A. Gomes, R. Gadow, T. Braun
Abstract
Download
Information-centric networking (ICN) is a new communication paradigm that aims at increasing security and efficiency of content delivery in communication networks. In recent years, many research efforts in ICN have focused on caching strategies to reduce traffic and increase overall performance by decreasing download times. Since caches need to operate at line-speed, they have only a limited size and content can only be stored for a short time. However, if content needs to be available for a longer time, e.g., for delay-tolerant networking or to provide high content availability similar to content delivery networks (CDNs), persistent caching is required. We base our work on the Content-Centric Networking (CCN) architecture and investigate persistent caching by extending the current repository implementation in CCNx. We show by extensive evaluations in a YouTube and webserver traffic scenario that repositories can be efficiently used to increase content availability by significantly increasing the cache hit rates.
Dima Mansour and Torsten Braun
Abstract
Download
Information-Centric Networking (ICN) considers content, instead of hosts, as the building block of the network. This allows clients to request content objects by name rather than by the addresses of their providers. One limitation of ICN is that it does not naturally support services. Service-Centric Networking considers services as first-class citizens in the network, shifting the current Internet architecture towards a service-centric style. In this paper, we discuss and compare several approaches to Service-Centric Networking with respect to goals, architectures, naming schemes, and design decisions. We present the results and the future challenges of these approaches as guidelines for future research in the Service-Centric Networking field.
Dima Mansour and Torsten Braun
Abstract
Download
NextServe is a Service-Centric Networking approach built on top of Content-Centric Networking (CCN). NextServe allows for the publishing and invocation of remote services on top of the name-based CCN routing. NextServe has a human-readable naming scheme that supports caching, parameter passing, and service aggregation. In this report, we enhance the architecture and the naming scheme of NextServe to support heterogeneous applications, services, and protocols. We evaluate the performance of NextServe regarding service response time and caching. We show that NextServe has an insignificant overhead and supports CCN caching efficiently.
Islam Alyafawi, Eryk Schiller, Torsten Braun, Desislava Dimitrova, Andre Gomes, Navid Nikaein
Abstract
Download
Cloudification of the Centralized Radio Access Network (C-RAN), in which signal processing runs on general purpose processors inside virtual machines, has lately received significant attention. Due to short deadlines in the LTE Frequency Division Duplex access method, processing time fluctuations introduced by the virtualization process have a deep impact on C-RAN performance. This report evaluates bottlenecks of the cloud performance of OpenAirInterface (OAI, an open-source software-based implementation of LTE), provides feasibility studies on C-RAN execution, and introduces a cloud architecture that significantly reduces the encountered execution problems. In typical cloud environments, the OAI processing time deadlines cannot be guaranteed. Our proposed cloud architecture shows good characteristics for OAI cloud execution. As an example, in our setup more than 99.5% of processed LTE subframes meet reasonable processing deadlines, close to the performance of a dedicated machine.
H. Mercier, T. Braun, P. Felber, P. Kropf, P. Kuonen (eds.)
Abstract
Download
The Doctoral Workshop on Distributed Systems was held at Kandersteg, Switzerland, from June 3-5, 2014. Ph.D. students from the Universities of Neuchâtel and Bern as well as the University of Applied Sciences of Fribourg presented their current research work and discussed recent research results. This technical report includes the extended abstracts of the talks given during the workshop.
H. Mercier, T. Braun, P. Felber, P. Kropf, P. Kuonen (eds.)
Abstract
Download
The Doctoral Workshop on Distributed Systems was held at Plans-sur-Bex, Switzerland, from June 26-28, 2013. Ph.D. students from the Universities of Neuchâtel and Bern as well as the University of Applied Sciences of Fribourg presented their current research work and discussed recent research results. This technical report includes the extended abstracts of the talks given during the workshop.
Denis Rosario, Zhongliang Zhao, Eduardo Cerqueira, Torsten Braun, Aldri Santos
Abstract
Download
Wireless mobile sensor networks are enlarging the Internet of Things (IoT) portfolio with a huge number of multimedia services for smart cities. Safety and environmental monitoring multimedia applications will be part of Smart IoT systems, which aim to reduce emergency response time while also predicting hazardous events. In such mobile and dynamic (possibly disaster-affected) scenarios, a predefined end-to-end path is not a reliable solution; opportunistic routing instead makes routing decisions in a completely distributed manner, using hop-by-hop forwarding decisions based on protocol-specific characteristics. This enables the transmission of video flows of a monitored area/object with Quality of Experience (QoE) support to users, headquarters or IoT platforms. However, existing approaches rely on a single metric for candidate selection, such as link quality or geographic information, which causes a high packet loss rate and reduces the video perception from the human standpoint. This article proposes a cross-layer Link quality and Geographical-aware Opportunistic routing protocol (LinGO), designed for video dissemination in mobile multimedia IoT environments. LinGO improves routing decisions by using multiple metrics, including link quality, geographic location, and energy. The simulation results show the benefits of LinGO compared with well-known routing solutions for video transmission with QoE support in mobile scenarios.
H. Mercier, T. Braun, P. Felber, P. Kropf, P. Kuonen, E. Rivière (eds.)
Abstract
Download
The Doctoral Workshop on Distributed Systems was held at La Vue-des-Alpes, Switzerland, from July 4-6, 2012. Ph.D. students from the Universities of Neuchâtel and Bern as well as the University of Applied Sciences of Fribourg presented their current research work and discussed recent research results. This technical report includes the extended abstracts of each talk given during the workshop.
A.-F. Antonescu, P. Robinson, T. Braun
Abstract
Download
Cost-efficient operation while satisfying performance and availability guarantees in Service Level Agreements (SLAs) is a challenge for Cloud Computing, as these are potentially conflicting objectives. We present a framework for SLA management based on multi-objective optimization. The framework features a forecasting model for determining the best virtual machine-to-host allocation given the need to minimize SLA violations, energy consumption and waste. A comprehensive SLA management solution is proposed that uses event processing for monitoring and enables dynamic provisioning of virtual machines onto the physical infrastructure. We validated our implementation against several standard heuristics and were able to show that our approach is significantly better.
Desislava Dimitrova
Abstract
Download
The WISH (Wireless Integration of Sensor Networks in Hybrid Architectures) seminar was an opportunity for researchers from both academia and industry to share their work and views on developments in Wireless Sensor Networks (WSNs) and specifically their incorporation in hybrid networks. The seminar focused on research efforts on the territory of Switzerland. By bringing academic researchers from different but related areas into contact with industrial peers, the seminar gave an opportunity to share expertise on ongoing research and stimulated the identification of future trends and collaboration possibilities. The seminar program included ten technical talks, which provided a multidisciplinary view on WSNs and, in particular, on their deployment and practical value for a variety of real-world applications. A major discussion topic was the deployment of sensor networks in real environments, e.g., for the purpose of precipitation monitoring, and the many challenges arising from that. Another strong focus was on the benefits of using sensor networks for enabling sustainable living. A third group of talks addressed the more technical aspects of developing a functional wireless sensor network, i.e., design, architectures and algorithms. The technical talks provided valuable feedback on what is being researched by academia, what is of interest to industry and what is the current state of match between the two areas. These proceedings contain the abstracts of the technical talks along with contact information of the authors.
I.F.R. Noppen, D.C. Dimitrova, T. Braun
Abstract
Download
The main challenge in wireless networks is to make optimal use of the limited radio resources to support data transfer. This holds for large-scale deployments as well as for small-scale test environments such as testbeds. We investigate two approaches to reduce the radio traffic in a testbed, namely, filtering of unnecessary data and aggregation of redundant data. Both strategies exploit the fact that, depending on the tested application's objective, not all data collected by the testbed may be of interest. The proposed design solutions indicate that a traffic reduction as high as 97% can be achieved in the target testbed.
Philipp Hurni, Torsten Braun
Abstract
Download
This paper investigates the potential of Forward Error Correction (FEC) mechanisms and dynamic/run-time adaptive FEC variants in Wireless Sensor Networks. We implemented eight different Error Correction Codes (ECCs), ranging from simple bit-repetition schemes and Hamming codes to complex Bose-Chaudhuri-Hocquenghem codes, as well as three run-time adaptive FEC schemes which adapt the correctional power of the ECCs to the current link quality. The paper evaluates the computational costs and the resulting benefits of the FEC schemes under real-world conditions in a distributed testbed environment.
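One of the implemented code families, Hamming codes, corrects any single bit error per block. A (7,4) Hamming code can be sketched as follows; the report's implementations target sensor-node hardware, so this Python version is illustrative only.

```python
def hamming74_encode(data):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword
    with parity bits at (1-based) positions 1, 2 and 4."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(codeword):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Bit-repetition schemes trade more overhead for simpler decoding, while BCH codes correct multiple errors at higher computational cost; the run-time adaptive schemes switch among such codes as link quality changes.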
Martin Giese and Roman Kuznets (editors)
Abstract
Download
This booklet contains the research papers presented at the International Workshop on First-Order Theorem Proving held in Bern, Switzerland, July 4-5, 2011. The workshop was the eighth in a series of international workshops held since 1997, intended to focus effort on First-Order Theorem Proving as a core theme of Automated Deduction, and to provide a forum for presentation of recent work and discussion of research in progress.
P. Hurni, T. Braun
Abstract
Download
Energy-Efficient Medium Access (E2-MAC) protocols that duty-cycle the radio transceiver typically trade off energy efficiency against classical quality of service (QoS) parameters (throughput, latency, reliability). Today's E2-MAC protocols can deliver small amounts of data with a low energy footprint, but introduce severe restrictions with respect to throughput and latency, and they still fail to adapt to varying traffic load at run-time. This paper studies the Maximally Traffic-Adaptive MAC protocol MaxMAC, which aims at maximal adaptability to variable traffic conditions at run-time. By thoroughly analyzing the performance of MaxMAC in a series of distributed experiments, the paper shows that under variable traffic conditions, MaxMAC effectively combines the advantages of energy-unconstrained CSMA (high throughput, high PDR, low latency) with those of classical E2-MAC protocols (high energy efficiency).
Geoff Coulson, Torsten Braun, Thomas Staub
Abstract
Download
In this document we present an initial design that accommodates 'virtual mobility' into testbeds for wireless sensor networks. The virtual mobility of physical, simulated or emulated nodes is treated in a uniform manner by embedding the nodes in a virtual space. The virtual space is formed by a simulation model and handles all the traffic from the nodes. The traffic of physical nodes is therefore intercepted and redirected to this model. We discuss the aspects of 'virtual mobility' and provide an initial design that supports 'virtual mobility' across a federated testbed for wireless sensor networks.
Kirsten Dolfus, Torsten Braun
Abstract
Download
Data gathering, either for event recognition or for monitoring applications, is the primary intention of sensor network deployments. In many cases, data is acquired periodically and autonomously, and simply logged onto secondary storage (e.g. flash memory) either for delayed offline analysis or for on-demand burst transfer. Moreover, operational data such as connectivity information and node and network state is typically kept as well. Naturally, measurement and/or connectivity logging comes at a cost, and the space for doing so is limited. Finding a good representative model for the data and providing clever coding of information, i.e., data compression, may be a means to use the available space to its best. In this report, we explore the design space for data compression for wireless networks by profiling common, publicly available algorithms. Several goals, such as a low overhead in terms of utilized memory and compression time as well as a decent compression ratio, have to be well balanced in order to find a simple, yet effective compression scheme.
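The profiling methodology can be sketched with Python's standard compression codecs; the report's evaluation concerns memory- and CPU-constrained sensor nodes, so the codecs, payload and any resulting numbers below are illustrative of the approach, not of the report's measurements.

```python
import bz2
import lzma
import time
import zlib

def profile(codecs, payload):
    """Return {codec name: (compression ratio, seconds)} for one payload."""
    results = {}
    for name, compress in codecs.items():
        t0 = time.perf_counter()
        compressed = compress(payload)
        elapsed = time.perf_counter() - t0
        results[name] = (len(payload) / len(compressed), elapsed)
    return results

# Repetitive, log-style sensor data compresses well.
payload = b"".join(b"temp=21.5;rssi=-71;seq=%03d\n" % i for i in range(200))
codecs = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}
```

Balancing the two columns of the result, ratio against time (and, on a real mote, against RAM and code size), is exactly the trade-off the report explores.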
K. Brünnler and T. Studer (editors)
Abstract
Download
The aim of PCC is to stimulate research in proof theory, computation, and complexity, focusing on issues which combine logical and computational aspects. Topics may include applications of formal inference systems in computer science, as well as new developments in proof theory motivated by computer science demands. Specific areas of interest are (nonexhaustively listed) foundations for specification and programming languages, logical methods in specification and program development including program extraction from proofs, type theory, new developments in structural proof theory, and implicit computational complexity.
Thomas Staub, Markus Anwander, Marc Brogle, Kirsten Dolfus, Torsten Braun, Kurt Baumann, Christian Félix, Pascal Dornier
Abstract
Download
Wireless Mesh Networks (WMNs) operating in the 5GHz band (IEEE 802.11 a/h) offer a great opportunity to function as wireless access networks. Remote sites that lack direct access to a fibre network may benefit from this technology as they can be used to bridge possibly large distances. The high gain of directional antennas amplifies the reception of signals in focused directions and reduces interference from unwanted sources. Therefore, they are the better choice for such bridging scenarios. In this document, we report the results of the feasibility study "Wireless Mesh Networks for Interconnection of Remote Sites to Fixed Broadband Networks (Feasibility Study)". We present our experiences with setting up such a Wireless Access Network using directional antennas in the area of Neuchâtel, Switzerland. We describe the necessary equipment and planning steps, highlight common pitfalls and discuss gained insights as well as experimental results. Measured data supports the feasibility of our networking approach, yet reveals the high impact of general challenges that have to be overcome in real-world deployments. Moreover, the project results are discussed from the viewpoint of the project partners.
Marc Brogle, Torsten Braun (eds.)
Abstract
Download
The BeNeFri Summer School 2009 on Dependable Systems in Schloss Münchenwiler from June 10-12 was organized by four research groups from the Universities of Bern, Fribourg and Neuchâtel. The University of Bern was represented by the research group “Computer Networks and Distributed System” of the Institute of Computer Science and Applied Mathematics at the University of Bern, headed by Prof. Torsten Braun. The research group “Telecommunications, Networks, Security” of the Department of Computer Science, headed by Prof. Ulrich Ultes-Nitsche, represented the University of Fribourg. The University of Neuchâtel was represented by two research groups from the “computer science department (IIUN)”, namely “distributed systems” headed by Prof. Peter Kropf and “dependable systems and networks” headed by Prof. Pascal Felber. The focus of this retreat was to present and discuss recent research results and currently ongoing research activities of the members of the four research groups. The research group members gave twenty-two presentations, from the areas of overlay networks, wireless mesh and sensor networks, network security, distributed systems, zero-knowledge proofs, transactions and security in e-voting. Extensive time (typically 45 minutes per talk) was allocated to allow detailed presentations and discussions. This technical report summarizes the various talks and describes mostly unpublished work that is currently in progress.
David Röthlisberger, Oscar Nierstrasz, Stephane Ducasse, Damien Pollet, Romain Robbes
Abstract
Mainstream IDEs generally rely on the static structure of a software project to support browsing and navigation. Previous research has shown that other forms of information, such as the evolution of software artifacts, historical information capturing how previous developers have navigated the code, or the runtime behavior of the application, can be much more useful than the static structure of the software to direct developers to parts of the code relevant to the task at hand. These different kinds of information, however, are not uniformly accessible from the IDE, so the developer may have to struggle with heterogeneous tools and visualizations to find the relevant artifacts. We propose HeatMaps, a simple but highly configurable technique to enrich the way an IDE displays the static structure of a software system with additional kinds of information. A heatmap highlights software artifacts in colors such as bright red or pale blue, according to various metric values, to indicate their potential degree of interest. Heatmaps can be dynamically configured to reflect different kinds of information relevant to a given task at hand. We present a prototype system that implements heatmaps, and we describe an initial study that assesses the degree to which different heatmaps effectively guide developers in navigating software.
Jacek Jonczy, Rolf Haenni
Abstract
Download
The theories of reliability and diagnostics, despite their close relationship, have mostly been treated as two separate fields in the literature. However, it has been shown in the past few years that their dual character can be exploited in the context of modular systems. Such systems have a great practical impact on reliability computation and diagnostics, since their structure can be used to significantly reduce the computational effort needed for their evaluation. It is also known that the underlying structure function can be represented in a compact way by Propositional Directed Acyclic Graphs (PDAGs), which form a powerful Boolean representation language. Recently, PDAGs have also been applied successfully in the context of network reliability. The present paper continues this line of research and proposes several extensions. First, we introduce a unifying formal model for system description, reliability evaluation, and diagnostics. Besides the formal specification of modular systems and reliability networks, the model allows networks and modules to be combined arbitrarily, leading to so-called hybrid systems. The underlying computational machinery provided by PDAGs allows reliability to be evaluated in a convenient and efficient way. Second, we provide concise algorithms both for generating structure functions for network reliability problems and for PDAG manipulations, and we examine their complexity. Third, we show that posterior probabilities of system elements can be efficiently computed using PDAGs for diagnostic purposes, in particular in modular systems.
Ricardo Wehbe
Abstract
Download
There is no completely satisfying cut-free axiomatization for Propositional Linear-Time Temporal Logic (PLTL). In this paper we show that it is possible to limit the cut formulas to the Fischer-Ladner closure (strictly speaking, to something similar to the Fischer-Ladner closure) of the sequent whose validity is to be proved.
Camillo Bruni, Toon Verwaest, Marcus Denker
Abstract
Download
Virtual machines emulating hardware devices are generally implemented in low-level languages and in a low-level style for performance reasons. This trend results in systems that are largely difficult to understand, extend and maintain. As new general techniques for virtual machines arise, it gets harder to incorporate or test these techniques because of early design and optimization decisions. In this paper we show how such decisions can be postponed to later phases by separating virtual machine implementation issues from the high-level machine-specific model. We construct compact models of whole-system VMs in a high-level language, which exclude all low-level implementation details. We use the pluggable translation toolchain PyPy to translate those models to executables. During the translation process, the toolchain reintroduces the VM implementation and optimization details for specific target platforms. As a case study we implement an executable model of a hardware gaming device. We show that our approach to VM building increases understandability, maintainability and extendability while preserving performance.
Luca Alberucci
Abstract
Download
We provide a purely syntactical treatment of simultaneous fixpoints in the modal mu-calculus by proving their properties as greatest and least fixpoints, that is, the fixpoint axiom and the induction rule, directly in Kozen's axiomatisation. Further, we apply our result to obtain a completeness result for characteristic formulae of finite pointed transition systems.
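In standard notation, the two least-fixpoint properties in question are the fixpoint (unfolding) axiom and the induction (Park) rule:

```latex
% Fixpoint axiom: \mu X.\varphi is a prefixpoint of \varphi
\vdash \varphi[\mu X.\varphi / X] \rightarrow \mu X.\varphi

% Induction rule: \mu X.\varphi is the least such prefixpoint
\frac{\vdash \varphi[\psi / X] \rightarrow \psi}
     {\vdash \mu X.\varphi \rightarrow \psi}
```

The greatest-fixpoint properties of \nu X.\varphi are the duals of these; the report derives the analogous principles for simultaneous fixpoints syntactically.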
Marc Brogle, Torsten Braun (eds.)
Abstract
Download
The BeNeFri Summer School 2008 on Dependable Systems in Quarten from June 23-26 was organized by four research groups from the Universities of Bern, Fribourg and Neuchâtel. The University of Bern was represented by the research group “Computer Networks and Distributed System” of the Institute of Computer Science and Applied Mathematics at the University of Bern, headed by Prof. Torsten Braun. The research group “Telecommunications, Networks, Security” of the Department of Computer Science, headed by Prof. Ulrich Ultes-Nitsche, represented the University of Fribourg. The University of Neuchâtel was represented by two research groups from the “computer science department (IIUN)”, namely “distributed systems” headed by Prof. Peter Kropf and “dependable systems and networks” headed by Prof. Pascal Felber. The focus of this retreat was to present and discuss recent research results and currently ongoing research activities of the members of the four research groups. The research group members gave twenty-two presentations, from the areas of overlay networks, wireless mesh and sensor networks, network security, distributed systems, anonymity, transactions and automata theory. Extensive time (typically 45 minutes per talk) was allocated to allow detailed presentations and discussions. This technical report summarizes the various talks and describes mostly unpublished work that is currently in progress.
Ph. Stouppa, Th. Studer
Abstract
Download
Information systems support data privacy by granting access only to certain (public) views. The data privacy problem is to decide whether hidden (private) information may be inferred from the public views and some additional general background knowledge. We study this problem in the context of ALC knowledge bases. First we show that the ALC privacy problem with respect to concept retrieval and subsumption queries is ExpTime-complete. Then we provide a sufficient (but not necessary) condition for data privacy that can be checked in PTime. This second approach is directly applicable to modular ontologies.
Oscar Nierstrasz, Marcus Denker, Tudor Gîrba, Adrian Kuhn, Adrian Lienhard, David Röthlisberger
Abstract
Download
Few real software systems are built completely from scratch nowadays. Instead, systems are built iteratively and incrementally, while integrating and interacting with components from many other systems. These systems also last longer than their developers might imagine; they are, in effect, eternal. Nevertheless the platforms, tools and environments we use to develop software are still largely based on an outmoded model that presupposes that software systems are closed and will not significantly evolve after deployment. We claim that in order to enable effective and graceful evolution of eternal systems, we must make them self-aware. A self-aware eternal system supports evolution by: (i) providing explicit, first-class models of software artifacts, change and history at the level of the platform, (ii) continuously analysing static and dynamic evolution to track emergent properties, and (iii) closing the gap between the domain model and the developers' view of the evolving system. We outline our vision of self-aware eternal systems and identify the research challenges to realizing this vision.
Torsten Braun, Ulrich Ultes-Nitsche, Marc Brogle, Dragan Milic, Patrick Lauer, Thomas Staub, Gerald Wagenknecht, Markus Anwander, Markus Waelchli, Markus Wulff, Carolin Latze, Michael Hayoz, Christoph Ehret, Thierry Nicola
Abstract
Download
The research group “Computer Networks and Distributed System” of the Institute of Computer Science and Applied Mathematics at the University of Bern, headed by Prof. Torsten Braun, organized an internal retreat from July 2-4, 2007 at Quarten. The research group “Telecommunications, Networks, Security” of the Department of Computer Science at the University of Fribourg, headed by Prof. Ulrich Ultes-Nitsche, also participated in the event. The focus of this retreat was to present and discuss recent research results and currently ongoing research activities of the members of both research groups. The research group members gave fourteen presentations, from the areas of overlay networks, wireless mesh and sensor networks, network security, distributed systems and automata theory. Extensive time (typically 60 minutes per talk) was allocated to allow detailed presentations and discussions. This technical report summarizes the various talks and describes mostly unpublished work that is currently in progress.
Markus Anwander, Gerald Wagenknecht, Torsten Braun
Abstract
Download
Heterogeneous wireless sensor networks (WSNs) consist of a number of heterogeneous sensor nodes. Typically, different sensor node platforms are equipped with different types of radio modules and therefore cannot communicate with each other. To interconnect the different sensor sub-networks, a wireless mesh network (WMN) operates as a backbone. For each sensor sub-network, a sensor node gateway is connected to every mesh node of the WMN. We choose Contiki as an appropriate operating system to support dynamic and energy-efficient reconfiguration and code updates of different sensor node platforms. Contiki supports preemptive multithreading, inter-process communication and dynamic runtime linking of standard Executable and Linkable Format (ELF) files. The lightweight and compact base system is highly portable to other sensor platforms. Contiki also offers a minimal TCP/IP stack for communication. The management architecture consists of the following infrastructural elements: a management station, a number of management nodes and a high number of heterogeneous sensor nodes. The supported management tasks are: monitoring the WSN, configuration of the WSN, code updates and managing sensor data. The management station executes all management tasks. Management nodes are implemented as wireless mesh nodes.
Markus Anwander, Gerald Wagenknecht, Torsten Braun
Abstract
Download
The first work package defines the overall wireless sensor network (WSN) management and code distribution framework. This includes the determination of appropriate sensor node platforms and an appropriate sensor operating system. Furthermore, it includes the definition and implementation of a management architecture for WSNs. In a first step, several types of sensor nodes and operating systems have been evaluated. Based on the management tasks identified as required in a WSN, a management architecture has been defined. Management tasks to be supported are: monitoring the WSN, configuration of the WSN, code updates and managing sensor data. The management architecture consists of the following infrastructural elements: a management station, a number of management nodes and a high number of heterogeneous sensor nodes. All management tasks are controlled by the management station. Management nodes are implemented as wireless mesh nodes. The next steps include the adaptation of Contiki, the chosen operating system, so that it runs on all sensor platforms used. Further, to access and interconnect the different sensor sub-networks, gateways running on the mesh nodes have to be developed. In parallel, the implementation of the management architecture will start. IP support is planned for all communication to and within the heterogeneous WSN. Reliable transport protocols will be developed for unicast, multicast and broadcast communication. Therefore, TCP and existing multicast and broadcast protocols have to be adapted as described in WP2.
Gerhard Jäger, Mathis Kretz, Thomas Studer
Abstract
Download
We explore cut-free deductive systems for the propositional modal mu-calculus. We begin by setting up two very natural infinitary calculi K^+_omega(mu) and K_omega(mu), and afterwards collapse K_omega(mu) to a finite cut-free system K_{< omega}(mu). All three systems are shown to be sound and complete formalizations of the propositional modal mu-calculus.
Piotr Skoczylas, Markus Wälchli, and Torsten Braun
Abstract
Download
Event tracking is one of the basic tasks of sensor networks. An intensity-based event localization algorithm was developed at the University of Bern [1]. The inaccuracy of the measurements and resource limitations in terms of energy consumption, storage capacity, processing and transmission time make a real-world implementation of this algorithm a challenging and exciting task. In this internship we investigated the real-world feasibility and performance of this distributed object localization and tracking algorithm, DELTA, which relies on light sensor readings. Some improvements have been made to the original DELTA algorithm to tailor it to the ESB hardware from Scatterweb [2]. Finally, evaluations and comparisons of different versions of DELTA as well as of a reference algorithm have been carried out.
Ricardo Wehbe
Abstract
Download
Sets of clauses are often used to represent the knowledge or beliefs of an agent. Usually sets of clauses represent incomplete information, because the agent does not have a complete "picture of the world." Thus, there is usually not a single model but a set of models that satisfy the set of clauses, representing the worlds that the agent considers possible. We define the invariant of a propositional set of clauses as the set of atoms that belong to all models and present a sequent system that may be used to construct such a set. We prove that the system is sound (all atoms that can be inferred with it are in the invariant) and complete (all atoms in the invariant may be inferred with the sequent system), and that the computation terminates after a finite number of steps.
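As a concrete illustration of the definition (a brute-force sketch, not the paper's sequent system), the invariant of a small clause set can be computed by enumerating all models:

```python
from itertools import product

def invariant(clauses, atoms):
    """Return the atoms true in every model of a set of clauses.

    Clauses are sets of signed literals, e.g. {("p", True), ("q", False)}
    meaning (p OR NOT q). Brute-force over all assignments; fine for
    small propositional examples.
    """
    models = []
    for bits in product([False, True], repeat=len(atoms)):
        assign = dict(zip(atoms, bits))
        if all(any(assign[a] == sign for a, sign in c) for c in clauses):
            models.append(assign)
    # atoms true in all models (empty if the clause set is unsatisfiable)
    return {a for a in atoms if models and all(m[a] for m in models)}

# Example: {p, p -> q} forces p and q to be true in every model.
clauses = [{("p", True)}, {("p", False), ("q", True)}]
print(sorted(invariant(clauses, ["p", "q"])))  # ['p', 'q']
```

The sequent system in the report constructs the same set deductively rather than by enumeration.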
Michael Wachter and Rolf Haenni
Abstract
Download
This paper presents a new approach to inference in Bayesian networks with Boolean variables. The principal idea is to encode the network by logical sentences and to compile the resulting CNF into a deterministic DNNF. From there, all possible queries are answerable in linear time relative to the size of the compiled d-DNNF. This makes it a potential solution for real-time applications of probabilistic inference with limited computational resources. The underlying idea is similar to Darwiche's differential approach to inference in Bayesian networks, but the core of the proposed CNF encoding is slightly different. This alternative encoding enables a more intuitive and elegant solution, which is apparently more efficient.
Philippe C.D. Robert, Severin Schoepke
Abstract
Download
In recent years, interactive ray tracing has become realisable, albeit mainly using clustered workstations and sophisticated acceleration structures. On nonclustered computer architectures this is still not an easy task to achieve, especially when rendering animated scenes, even though the computation power of modern workstations is increasing rapidly. In this paper we propose commonly known image-space rendering techniques to be used in the context of ray tracing. We describe a visibility preprocessing algorithm to perform interactive ray tracing based on the standard depth testing capability of graphics processing units. This method – item buffer rendering – is particularly suitable for rendering animated scenes, as it completely avoids the necessity of creating and updating any kind of spatial acceleration structure in order to achieve high frame rates. The item buffer stores indices referencing those primitives which are visible in screen-space. Primary ray intersection testing can therefore be executed very efficiently by performing only one primitive lookup operation and one intersection test. As a consequence, this approach reduces the total number of primary ray intersection tests to a minimum. Additionally we integrate shadow rendering into our ray tracer using the shadow mapping technique to avoid computationally expensive shadow rays. We compare CPU and GPU based implementations of our ray tracer and analyse the advantages and disadvantages of both approaches in terms of visual quality and rendering performance.
Attila Weyland and Torsten Braun
Abstract
Download
OSLab is the third generation in a series of distance learning courses established by mostly Swiss and some international universities. The underlying didactics have been developed in the VITELS project and written down in a didactics and design guide. The guide has been further extended in the EuQoS project. This document tries to reduce the guide to a reasonable size and incorporate the experiences made during the operation of the VITELS course. In the creation process of a learning course, we identified two possible roles: module authors and course authors. A module author creates a learning module. A course author defines the general layout and structure of the course as well as the modules. This document guides you through the process of creating a learning module for the Operating System Laboratory and therefore addresses module authors.
Attila Weyland and Torsten Braun
Abstract
Download
OSLab is the third generation in a series of distance learning courses established by mostly Swiss and some international universities. The underlying didactics have been developed in the VITELS project and written down in a didactics and design guide. The guide has been further extended in the EuQoS project. This document tries to reduce the guide to a reasonable size and incorporate the experiences made during the operation of the VITELS course. In the creation process of a learning course, we identified two possible roles: module authors and course authors. A module author creates a learning module. A course author maintains the structure and design of the course as well as the modules. This document guides you through the process of creating a learning course for the Operating System Laboratory and therefore addresses course authors.
Laura Ponisio, Oscar Nierstrasz
Abstract
Complex systems are decomposed into cohesive packages with the goal of limiting the scope of changes: if our packages are cohesive, we hope that changes will be limited to the packages responsible for the features we are changing, or at worst the packages that are immediate clients of those features. But how should we measure cohesion? Traditional cohesion metrics focus on the explicit dependencies and interactions between the classes within the package under study. A package, however, may be conceptually cohesive even though its classes exhibit no explicit dependencies. We propose a group of contextual metrics that assess the cohesion of a package based on the degree to which its classes are used together by common clients. We apply these metrics to various case studies, and contrast the degree of cohesion detected with that of traditional cohesion metrics. In particular, we note that object-oriented frameworks may appear not to be cohesive with traditional metrics, whereas our contextual metrics expose the implicit cohesion that results from the framework's clients.
Michael Wachter, Rolf Haenni
Abstract
Download
The canonical representation of Boolean functions offered by OBDDs (ordered binary decision diagrams) allows one to decide the equivalence of two OBDDs in polynomial time with respect to their size. It is still unknown whether this holds for other, more succinct supersets of OBDDs such as FBDDs (free binary decision diagrams) and d-DNNFs (deterministic, decomposable negation normal forms), but it is known that the equivalence test for these is probabilistically decidable in polynomial time. In this paper, we show that the same probabilistic equivalence test is also possible for cd-PDAGs (decomposable, deterministic propositional DAGs). cd-PDAGs are more succinct than d-DNNFs and therefore strictly more succinct than FBDDs and OBDDs.
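The probabilistic test builds on a classic idea: evaluate both diagrams as polynomials over a large field at random points, where decomposable conjunction becomes multiplication and deterministic disjunction becomes plain addition. A toy sketch of that idea (the node encoding and field size are illustrative assumptions, not the paper's construction):

```python
import random

PRIME = 2_147_483_647  # large prime field keeps collision probability low

def poly_eval(node, assign):
    """Evaluate the polynomial arising from a cd-PDAG-like node.

    Nodes: ("var", name), ("not", child), ("and", children) for a
    decomposable conjunction (product), ("or", children) for a
    deterministic disjunction (sum). Assumes those structural
    properties hold, as in the paper's setting.
    """
    kind = node[0]
    if kind == "var":
        return assign[node[1]] % PRIME
    if kind == "not":
        return (1 - poly_eval(node[1], assign)) % PRIME
    if kind == "and":
        r = 1
        for c in node[1]:
            r = r * poly_eval(c, assign) % PRIME
        return r
    r = 0  # "or": determinism makes disjunction a plain sum
    for c in node[1]:
        r = (r + poly_eval(c, assign)) % PRIME
    return r

def probably_equivalent(f, g, variables, rounds=20):
    for _ in range(rounds):
        assign = {v: random.randrange(PRIME) for v in variables}
        if poly_eval(f, assign) != poly_eval(g, assign):
            return False  # found a witness: definitely not equivalent
    return True  # equivalent with high probability

f = ("and", [("var", "x"), ("var", "y")])
g = ("and", [("var", "y"), ("var", "x")])
print(probably_equivalent(f, g, ["x", "y"]))  # True
```

Equivalent diagrams always agree; inequivalent ones disagree on a random point with overwhelming probability, so a few rounds suffice.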
Oscar Nierstrasz, Stéphane Ducasse and Nathanael Schärli
Abstract
Download
Traits are fine-grained components that can be used to compose classes, while avoiding many of the problems of multiple inheritance and mixin-based approaches. Since most implementations of traits have focused on dynamically-typed languages, the question naturally arises, how can one best introduce traits to statically-typed languages, like Java and C#? In this paper we argue that the flattening property of traits should be used as a guiding principle for any attempt to add traits to statically-typed languages. This property essentially states that, semantically, traits can be compiled away. We demonstrate how this principle applies to FTJ, a conservative extension to Featherweight Java.
Philippe C.D. Robert, Daniel Schweri
Abstract
Download
In this report we present a new system for visualising smoke and similar effects based on the Navier-Stokes equations. The system is optimised mainly for high rendering performance, targeting interactive real-time applications such as computer games or visual simulations. As such, the system supports both static and moving obstacles of arbitrary shape. We demonstrate the effect of using SIMD operations when optimising the fluid solver and introduce a parallel execution model for balancing the workload between multiple processor threads and the graphics hardware (GPU). Finally, we present four methods to visualise smoke and discuss their efficiency and the achieved visual realism.
Torsten Braun, Marc-Alain Steinemann, Thomas Bernoulli, Marc Brogle, Marc Danzeisen, Dragan Milic, Matthias Scheidegger, Thomas Staub, Markus Waelchli, Attila Weyland
Abstract
Download
The research group “Computer Networks and Distributed Systems” of the Institute of Computer Science and Applied Mathematics at the University of Bern, headed by Prof. Torsten Braun, organized an internal retreat from June 27-30, 2005 at Griesalp / Kiental. The focus of this retreat was to present and discuss recent research results and currently ongoing research activities of the research group members. The research group members gave eleven presentations, most of them in the areas of overlay networks, wireless mesh and ad hoc networks as well as sensor networks. Extensive time (typically 90 minutes per talk) was allocated to allow detailed presentations and discussions. This technical report summarizes the various talks and describes mostly unpublished work that is currently in progress.
Stéphane Ducasse, Serge Stinckwich
Abstract
Download
A selection of papers presented at the Smalltalk Conference of the yearly Smalltalk event organized by ESUG – the European Smalltalk Users Group. For the last three years, the goal of the Smalltalk Conference has been to give wide academic recognition to high-quality research done in or with Smalltalk. The topics we are interested in include (in a non-exhaustive list): new language features (mixins, AOP, ...), meta and reflective programming, code analysis (refactoring, ...), development processes (agile processes, unit testing, ...), virtual machines (optimization, new trends, ...), frameworks (web, graphical, ...), and software evolution (metrics, ...).
Marc Heissenbüttel, Torsten Braun, Markus Wälchli, and Thomas Bernoulli
Abstract
Download
In this report we present a simple and stateless broadcasting protocol called Dynamic Delayed Broadcasting (DDB) which allows locally optimal broadcasting without any knowledge about the neighborhood. As DDB does not require any transmissions of control messages, it conserves critical network resources such as battery power and bandwidth. Local optimality is achieved by applying a principle of Dynamic Forwarding Delay (DFD), which delays transmissions dynamically and in a completely distributed way at the receiving nodes, such that nodes with a higher probability of reaching new nodes transmit first. Analytical results show that DDB outperforms other stateless protocols. Furthermore, simulation results show that, unlike stateful broadcasting protocols, the performance of DDB does not suffer in dynamic topologies caused by mobility and sleep cycles of nodes. These results, together with its simplicity and its conservation of network resources, as no control message transmissions are required, make DDB especially suited for sensor and vehicular ad-hoc networks.
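The DFD principle can be sketched in a few lines: each receiver computes its own delay from locally available information. The linear mapping below, using distance to the previous sender as a proxy for expected additional coverage, is an illustrative assumption; the report's actual delay function may differ.

```python
MAX_DELAY_MS = 100.0
RADIO_RANGE_M = 50.0

def forwarding_delay(distance_to_sender):
    """Dynamic Forwarding Delay: receivers farther from the previous
    sender are likely to reach more new nodes, so they compute a
    shorter delay and transmit first."""
    progress = min(distance_to_sender, RADIO_RANGE_M) / RADIO_RANGE_M
    return MAX_DELAY_MS * (1.0 - progress)

# The farthest receiver rebroadcasts first; closer nodes overhear that
# duplicate during their own delay and cancel their scheduled transmission.
print(forwarding_delay(50.0))  # node at the range edge forwards immediately
print(forwarding_delay(10.0))  # nearby node waits and will likely cancel
```

No control messages are exchanged: the delay computation and the cancel-on-overhear rule are both purely local.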
Kilian Stoffel and Thomas Studer
Abstract
Download
It is an important question to ask what can be deduced from a given view instance and knowledge about the relational schema with its constraints. We present a procedure which computes a minimal database that yields the given view instance and satisfies the constraints on the relational schema. It is minimal in the sense that it contains exactly the certain answers to queries asking for basic relations. In this paper, we consider only conjunctive view definitions and unique key constraints.
Stephane Ducasse
Abstract
Download
A selection of papers presented at the Smalltalk Conference of the yearly Smalltalk event organized by ESUG - the European Smalltalk Users Group.
Stephane Ducasse and Michele Lanza and Laura Ponisio
Abstract
Download
Understanding packages is an important activity in the reengineering of large object-oriented systems. The relationships between packages and their contained classes can affect the cost of modifying the system. The main problem of this task is to quickly grasp the structure of a package and how it interacts with the rest of the system. In this paper we present a top-down program comprehension strategy based on polymetric views, radar charts, and software metrics. We illustrate this approach on two applications and show how we can retrieve the important characteristics of packages.
IAM
Download
Ruy de Oliveira and Torsten Braun
Abstract
Download
Multihop wireless networks based on the IEEE 802.11 MAC protocol are promising for small-scale ad hoc networks today. The 802.11 protocol minimizes the well-known hidden node problem but does not eliminate it completely. Consequently, the end-to-end bandwidth utilization may be quite poor if the involved protocols do not interact smoothly. In particular, the TCP protocol does not manage to obtain efficient bandwidth utilization because its congestion control mechanism is not tailored to such a complex environment. The main problems with TCP in such networks are the excessive amount of spurious retransmissions and the contention between data and acknowledgement (ACK) packets for the transmission medium. In this paper, we propose a dynamic adaptive strategy for minimizing the number of ACK packets in transit and mitigating spurious retransmissions. Using this strategy, the receiver adjusts itself to the wireless channel condition by delaying more packets when the channel is in good condition and fewer otherwise. Our technique not only improves bandwidth utilization but also reduces power consumption by retransmitting much less than regular TCP does. Extensive simulation evaluations show that our scheme provides very good enhancements in a variety of scenarios.
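The receiver-side adaptation described above can be sketched as follows; the thresholds and the channel-quality estimate are illustrative assumptions, not the paper's exact rules:

```python
class AdaptiveDelayedAck:
    """Receiver-side sketch: stretch ACKs (one ACK per several data
    packets) when the channel looks good, and fall back to one ACK per
    packet when it looks bad."""

    def __init__(self, max_delay=4):
        self.max_delay = max_delay   # max data packets covered by one ACK
        self.coalesce = 2            # current ACK stretch factor
        self.pending = 0             # unacknowledged data packets

    def on_channel_estimate(self, loss_rate):
        if loss_rate < 0.01:
            # good channel: delay more packets per ACK
            self.coalesce = min(self.coalesce + 1, self.max_delay)
        else:
            # bad channel: ACK every packet to trigger fast recovery
            self.coalesce = 1

    def on_data_packet(self):
        """Return True when an ACK should be sent now."""
        self.pending += 1
        if self.pending >= self.coalesce:
            self.pending = 0
            return True
        return False
```

Fewer ACKs in transit means less contention with data packets on the shared medium, which is exactly the effect the paper exploits.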
Philippe C.D. Robert and Daniel Schweri
Abstract
Download
We take advantage of recent improvements in graphics hardware, originally designed to increase the visual quality of rendered scenes, to study intersection testing algorithms on programmable graphics hardware. We implement two of the most commonly used algorithms in Nvidia Cg and compare the performance of our implementations with traditional, ANSI-C based variants to demonstrate the advantages and disadvantages of using programmable graphics hardware as general purpose coprocessor.
Alexandre Bergel and Stephane Ducasse and Oscar Nierstrasz and Roel Wuyts
Abstract
Download
A class extension is a method that is defined in a module, but whose class is defined elsewhere. Class extensions offer a convenient way to incrementally modify existing classes when subclassing is inappropriate. Unfortunately, existing approaches suffer from various limitations. Either class extensions have a global impact, with possibly negative effects for unexpected clients, or they have a purely local impact, with negative results for collaborating clients. Furthermore, conflicting class extensions are either disallowed or resolved by linearization, with consequent negative effects. To solve these problems we present classboxes, a module system for object-oriented languages that provides for method addition and replacement. Moreover, the changes made by a classbox are only visible to that classbox (or classboxes that import it), a feature we call local rebinding. To validate the model, we have implemented it in the Squeak Smalltalk environment and performed experiments modularising code.
Matthias Rieger and Stephane Ducasse
Abstract
Download
Duplicated code can have a severe, negative impact on the maintainability of large software systems. Techniques for detecting duplicated code exist but they rely mostly on parsers, technology that is often fragile in the face of different languages and dialects. We have proposed a lightweight approach based on simple string-matching that can be effectively used to detect a significant amount of code duplication. The approach scales well, and can be easily adapted to different languages and contexts. We validate our approach by applying it to a number of industrial and open source case studies, involving five different implementation languages and ranging from 256KB to 13MB of source code.
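The string-matching idea can be sketched in a few lines: normalize lines, slide a fixed-size window over each file, and index identical windows across locations. The window size and normalization below are illustrative, not the tool's actual configuration.

```python
from collections import defaultdict

def find_duplicates(files, min_lines=3):
    """Language-independent clone detection sketch: strip whitespace,
    then report runs of `min_lines` consecutive identical lines that
    appear in more than one place.

    `files` maps a file name to its source text. Returns a dict from
    each duplicated line sequence to its (file, start_line) locations.
    """
    index = defaultdict(list)  # line-sequence -> [(file, start_line)]
    for name, text in files.items():
        lines = [line.strip() for line in text.splitlines()]
        for i in range(len(lines) - min_lines + 1):
            chunk = tuple(lines[i:i + min_lines])
            if all(chunk):  # skip windows containing blank lines
                index[chunk].append((name, i + 1))
    return {chunk: locs for chunk, locs in index.items() if len(locs) > 1}

a = "x = 1\ny = 2\nz = x + y\nprint(z)\n"
b = "# other file\nx = 1\ny = 2\nz = x + y\n"
dups = find_duplicates({"a.py": a, "b.py": b})
print(len(dups))  # 1 duplicated 3-line window
```

Because only lexical lines are compared, no parser is needed, which is what makes the approach robust across languages and dialects.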
Simon Günter
Abstract
Download
The goal of this report is to give students and staff of the Artificial Intelligence research group at the Institute of Computer Science and Applied Mathematics of the University of Bern a short guide to testing classification methods and comparing them with one another.
Markus Gälli and Oscar Nierstrasz and Roel Wuyts
Abstract
Download
A single software fault may cause several tests to break if they cover the same methods. The coverage sets of tests may not just overlap, but include one another. This information could be of great use to developers who would like to focus on the most specific test that concerns a given fault. Unfortunately, existing unit testing tools neither gather nor exploit this information. We have developed a simple approach that analyses a set of test suites and infers the partial order corresponding to the inclusion hierarchy of the coverage sets. When several tests in an inclusion chain break, we can guide the developer to the most specific test in the chain. Our first experiments with three case studies suggest that most unit tests for typical applications are, in fact, comparable to other tests and can therefore be partially ordered. Furthermore, we show that this partial order is semantically meaningful, since faults that cause a test to break will, in nearly all cases, cause less specific tests to break too.
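The inclusion hierarchy of coverage sets is simply the subset partial order. A minimal sketch with hypothetical test and method names (not the paper's tool):

```python
def inclusion_chains(coverage):
    """Return pairs (t, u) where test t's coverage is a strict subset
    of test u's coverage, i.e. t is the more specific test."""
    pairs = []
    for t, ct in coverage.items():
        for u, cu in coverage.items():
            if t != u and ct < cu:  # strict subset of covered methods
                pairs.append((t, u))
    return pairs

# Hypothetical tests mapped to the methods they cover:
coverage = {
    "test_parse": {"parse"},
    "test_compile": {"parse", "emit"},
    "test_run": {"parse", "emit", "exec"},
}
# If test_run and test_parse both break, guide the developer to
# test_parse, the most specific broken test in the chain.
print(sorted(inclusion_chains(coverage)))
```

In practice the coverage sets would come from instrumented test runs; the ordering step itself is this cheap.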
Marc-Alain Steinemann and Thomas Spreng and Aljoscha Bachmayer and Torsten Braun
Abstract
Download
Switzerland has a very federal higher education system with many universities and as many different student authentication systems. Initiated by the Swiss Virtual Campus and driven by the demand for more inter-university work and student mobility, SWITCH, the Swiss education and research network, started to evaluate authentication and authorization architectures. Independently of the selection of the Shibboleth Authentication and Authorization Infrastructure for Switzerland, existing Resources have to be adapted to the chosen infrastructure. To ease this adaptation process, we present a generic Authentication and Authorization Infrastructure portal as the connecting unit between Home Organizations and Resources. We discuss the architecture as well as the portal's environment, followed by a chapter about a prototype implementation.
The research group on "Computer Networks and Distributed Systems" of the Institute of Computer Science and Applied Mathematics at the University of Berne, led by Prof. Torsten Braun, focuses its research activities on mobile and multimedia communications as well as on distance learning. In summer 2003, a seminar was organized at Pochtenalp with the goal of presenting and intensively discussing the state of the research work performed by the research group's Ph.D. students. External international experts working in related research areas were invited to contribute to the discussions and to present their current and future research. Each speaker had 90 minutes for his presentation, including discussion. The overall results have been very positive. In particular, the discussions have been very intensive and productive and should be valuable for the Ph.D. students' future work. This report summarizes the various talks by research group members and external experts.
Frank Buchli
Abstract
Download
Redocumentation and design recovery are two important areas of reverse engineering. Detecting recurring organizations of classes and communicating objects, called Software Patterns, supports this process. Many approaches to detect Software Patterns have been published in past years. Most of these approaches need a pattern library as a reference. Personal coding style and domain-specific requirements lead to creating new patterns or adapting existing ones, and make those approaches fail. The second problem is that the patterns found by those methods are presented without connection to the other patterns. To gain an overview of the whole system and its mechanisms, we propose to set the patterns in relation to each other. Our work shows a method to detect Software Patterns using Formal Concept Analysis (FCA). The advantage of this approach is that no reference library is needed and the results are set in relation to each other. FCA is a mathematical theory; we use it to detect groups of classes which instantiate a common, repeated pattern. The found patterns are presented in a lattice, a partial order relation among the patterns, which allows us to explore the patterns related to each of them. We implemented a prototype tool, ConAn PaDi, which navigates over the patterns with the Fish Eye View technique. For validation we applied this tool to three mid-sized Smalltalk applications.
Markus Gälli
Abstract
Download
While the "Design by Contract" assertions from Eiffel found their way into the language definitions of Python and of Java SDK 1.4, current object-oriented languages do not make the concepts of unit testing explicit in their definitions or meta-models. Lacking language support for unit testing makes it harder to compose and re-compose test scenarios and tests. We propose that an object-oriented language should include explicit concepts for example objects and example methods. These concepts ease the composition of complex test scenarios, help to refactor the program together with its tests, and keep the duration of the tests as low and the coverage of the tests as high as possible.
Stephane Ducasse
Abstract
Download
Reengineering object-oriented applications is becoming a vital activity in today's industry, where developer turnover drains the oral memory out of the systems themselves and where applications must constantly evolve to meet new requirements. This document summarizes the research effort led on reverse engineering and reengineering object-oriented legacy systems. It includes (1) the definition of a suitable meta-model for reengineering, FAMIX; this meta-model, even if flat, supports both reverse engineering and code refactoring analysis; (2) the presentation of a reengineering platform, MOOSE; (3) the evaluation of software metrics for reengineering; (4) the definition of simple visual techniques to support the understanding of large systems and of finer-grained code elements; (5) the identification of and cure support for duplicated code; (6) the use of dynamic information to support composable views and collaboration extraction; and (7) the identification of reengineering patterns.
Stephane Ducasse, Nathanael Schaerli, and Roel Wuyts
Abstract
Download
As object-oriented programmers, we are trained to capture common properties of objects in classes that can be reused. Similarly, we would like to capture common properties of classes in metaclass properties that can be reused. This goal has led researchers to propose models based on explicit metaclasses, but this has opened Pandora's box, leading to metaclass composition problems. Numerous approaches have been proposed to fix the problem of metaclass composition, but the composition of conflicting properties was always resolved in an ad hoc manner. Our approach uses traits, groups of methods that act as a unit of reuse from which classes are composed, and represents metaclass properties as traits. Metaclasses are then composed from these traits. This solution supports the reuse of metaclass properties, and their safe and automatic composition based on explicit conflict resolution. The paper compares existing models for composing metaclass properties, discusses traits and our solution, and shows some concrete examples implemented in the Smalltalk environment Squeak.
IAM
Download
Oscar Nierstrasz
Abstract
Download
Real software systems are open and evolving. It is a constant challenge in such environments to ensure that software components are safely composed in the face of changing dependencies and incomplete knowledge. To address this problem, we propose a new kind of type system which allows us to infer not only the type provided by a software component in an open system, but also the type it requires of its environment, subject to certain constraints. The contractual type we infer for components can then be statically checked when components are composed. To illustrate our approach, we introduce the form calculus, a calculus of explicit environments, and we present a type system that infers types for form expressions.
Oscar Nierstrasz and Franz Achermann and Stefan Kneubuehl
Abstract
Download
Piccola is a small, experimental composition language - a language for building applications from software components implemented in another, host programming language. This document describes JPiccola, the implementation of Piccola for the Java host language.
Marc-Alain Steinemann and Attila Weyland and Jacques Viens and Torsten Braun
Abstract
Download
This publication describes the didactics and design requirements for a hands-on, session-oriented Internet-based course. The guide describes the general introduction for the course as well as the four sections of each of the content modules. With this guide it is possible to build up Internet-based courses without prior knowledge of didactics or design. The guide can be easily adapted to new situations.
Marc Heissenbüttel and Torsten Braun
Abstract
Download
We present a novel routing algorithm (called BLR: Beaconless Routing Algorithm) for mobile ad-hoc networks in which nodes are not required to have information about neighboring nodes, neither about their positions nor even about their existence. If a node has a packet to transmit, it just broadcasts the packet. Only nodes within a certain area are potentially allowed to forward the packet. Each of these nodes introduces an additional delay depending on its position before forwarding the packet. One node eventually transmits first; the other nodes detect this subsequent forwarding of the same packet and abandon their scheduled transmission. BLR especially avoids beaconing, a broadcast mechanism to inform neighbors of a node's position, which exists to our knowledge in all other position-based routing algorithms. These mechanisms use scarce battery power, interfere with regular data transmission, and degrade the performance of the network. Analytical results and simulation experiments indicate that BLR can provide efficient and battery-conserving routing in mobile ad-hoc networks.
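The per-receiver delay idea behind BLR can be sketched as follows; the forwarding-area test and the delay function here are simplified assumptions, not the paper's exact definitions:

```python
import math

RADIO_RANGE_M = 100.0
MAX_DELAY_MS = 50.0

def blr_delay(sender, dest, me):
    """Additional delay a receiver applies before forwarding in a
    BLR-style scheme: nodes making more progress toward the destination
    wait less, so the best-placed node forwards first and the others
    cancel when they overhear its transmission.

    Positions are (x, y) tuples. Returns None if this node is outside
    the forwarding area and must never forward.
    """
    progress = math.dist(sender, dest) - math.dist(me, dest)
    if progress <= 0 or math.dist(sender, me) > RADIO_RANGE_M:
        return None  # no progress or out of range: stay silent
    return MAX_DELAY_MS * (1.0 - progress / RADIO_RANGE_M)

print(blr_delay((0, 0), (200, 0), (80, 0)))   # ~10 ms: forwards early
print(blr_delay((0, 0), (200, 0), (-10, 0)))  # None: behind the sender
```

Note that no node needs to know its neighbors: each receiver decides purely from its own position and the positions carried in the packet, which is what makes the scheme beacon-free.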
Andrew Black and Nathanael Schärli and Stéphane Ducasse
Abstract
Download
Traits are a programming language technology modeled after mixins but avoiding their problems. In this paper we refactor the Smalltalk collections hierarchy using traits. We observed that the original hierarchy contained much duplication of code; traits let us remove all of it. Traits also make possible much more general reuse of collection code outside of the existing hierarchy; for example, they make it easy to convert other collection-like things into true collections. Our refactoring reduced the size of the collection hierarchy by approximately 12 per cent, with no measurable impact on execution efficiency. More importantly, understandability and reusability of the code was significantly improved, and the path was paved for a more intensive refactoring.
IAM
Download
Nathanael Schärli and Stéphane Ducasse and Oscar Nierstrasz and Andrew Black
Abstract
Traits are a simple compositional tool for structuring classes in object-oriented programs. Traits are essentially groups of methods that serve as the primitive units of code reuse. This paper presents an abstract calculus for classes using single inheritance, for traits, and for an extended inheritance operation that uses a trait in addition to inheriting from a superclass. It also defines four combinators that can be used to build traits from simpler traits. The semantics of classes is defined in terms of a simple set-theoretic model. Using this model, we give the meaning of a class that is built from traits by exhibiting an equivalent class that is trait-free. This transformation does not modify any method in the class. From this we prove our main theorem: that the use of traits has no semantic impact, that is, it enhances understandability and reuse without adding complexity to the execution model or changing the language semantics.
Nathanael Schärli and Stéphane Ducasse and Oscar Nierstrasz and Andrew Black
Abstract
Download
Inheritance is the fundamental reuse mechanism in object-oriented programming languages; its most prominent variants are single inheritance, multiple inheritance, and mixin inheritance. In the first part of this paper, we identify and illustrate the conceptual and practical reusability problems that arise with these forms of inheritance. We then present a simple compositional model for structuring object-oriented programs, which we call traits. Traits are essentially groups of methods that serve as building blocks for classes and are primitive units of code reuse. In this model, classes are composed from a set of traits by specifying glue code that connects the traits together and accesses the necessary state. We demonstrate how traits overcome the problems arising with the different variants of inheritance, we discuss how traits can be implemented effectively, and we summarize our experience applying traits to refactor an existing class hierarchy.
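A toy model of trait composition: traits as plain method dictionaries, composed with explicit conflict detection plus glue code. This is a hedged Python stand-in for the Smalltalk model described here, with made-up trait and method names.

```python
# Traits are groups of methods; glue code connects them and provides
# access to the state they require.
TColor = {"describe": lambda self: f"color={self.color()}"}
TCircle = {"area": lambda self: 3.14159 * self.radius() ** 2}

def compose(name, *traits, **glue):
    """Build a class from traits plus glue methods."""
    methods = {}
    for trait in traits:
        for m, impl in trait.items():
            if m in methods:
                # conflicts are not linearized away: the composing
                # class must resolve them explicitly
                raise TypeError(f"trait conflict on {m!r}")
            methods[m] = impl
    methods.update(glue)  # glue satisfies the traits' requirements
    return type(name, (object,), methods)

Circle = compose("Circle", TColor, TCircle,
                 color=lambda self: "red",   # required by TColor
                 radius=lambda self: 2.0)    # required by TCircle
print(Circle().describe())  # color=red
```

Unlike mixin linearization, a name provided by two traits fails loudly at composition time, mirroring the explicit conflict resolution the paper argues for.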
The research group on "Computer Networks and Distributed Systems" of the Institute of Computer Science and Applied Mathematics at the University of Berne, led by Prof. Torsten Braun, focuses its research activities on mobile and multimedia communications as well as on distance learning. In summer 2002, a seminar was organized in Ticino with the goal of presenting and intensively discussing the state of the research work performed by the research group's Ph.D. students. External international experts working in related research areas were invited to contribute to the discussions and to present their current and future research areas. Each speaker had 90 minutes for their presentation, including discussion. The overall results were very positive. In particular, the discussions were very intensive and productive and should be valuable for the Ph.D. students' future work. This report summarizes the various talks by research group members and external experts.
Ruy de Oliveira and Torsten Braun
Abstract
Download
The Transmission Control Protocol (TCP) raises a number of issues when required to work in a wireless environment. In particular, within an ad hoc network, where changes can happen quickly and unpredictably, it has to deal with tough new challenges such as a high probability of both network partitions and route failures due to mobility. In order to adapt TCP to this demanding paradigm, several improvements have been proposed. Nevertheless, most of them offer only limited enhancements, as they do not address important related issues. In this paper, we present an overall view of this subject in a detailed analysis of the major factors involved. Specifically, we show how TCP can be affected by mobility and lower-layer strategies, including path asymmetry. Additionally, we highlight the main features and weaknesses of the existing proposed schemes, and point out the main open issues in this area.
Pascal Habegger and Hanspeter Bieri
Abstract
Download
It is well known that the Internet has experienced fascinating growth in the past. Unfortunately, precise measurements and statistics of its size and relevant descriptions of its structure are not available. It is the goal of ongoing research to get a better understanding of the Internet topology at different levels of abstraction (e.g., at the router level or AS level). This would help to design more realistic simulation scenarios when evaluating the performance of network protocols. It would also ease the search for structural weaknesses in the Internet, help the design of next-generation Internet protocols, and might even make it possible to predict the further evolution of the Internet in the near future. This Technical Report gives an assessment of current research results by listing and evaluating important models which attempt to explain the formation of the topology of the Internet as we experience it today. It also contains a survey of major topology generators which are currently used by the network simulation community. It concludes with an outlook on future work and open problems.
Sergei Artemov and Tatiana Yavorskaya (Sidon)
Abstract
Download
The Logic of Proofs solved a long-standing problem of Gödel's concerning his provability calculus. It also opened new lines of research in proof theory, modal logic, typed programming languages, knowledge representation, etc. The propositional logic of proofs is decidable and admits a complete axiomatization. In this paper we show that the first-order logic of proofs is not recursively axiomatizable.
Luca Alberucci and Gerhard Jaeger
Abstract
Download
The notions of common knowledge and common belief play an important role in several areas of computer science (e.g. distributed systems, communication), in philosophy, game theory, artificial intelligence, psychology, and many other fields which deal with interaction within a group of "agents", agreement, and coordinated actions. In the following we present several deductive systems for common knowledge above epistemic logics - such as K, T, S4 and S5 - with a fixed number of agents. We focus on structural and proof-theoretical properties of these calculi.
Tamar Richner and Stéphane Ducasse
Abstract
Download
Modeling object-oriented applications using collaborations and roles is well accepted. Collaboration-based or role-based designs decompose an application into tasks performed by a subset of the applications' classes. Collaborations provide a larger unit of understanding and reuse than classes, and are an important aid in the maintenance and evolution of the software. The extraction of collaborations is therefore an important issue in design recovery. In this paper we propose an iterative approach which uses dynamic information to support the recovery and understanding of collaborations. We present the problems of extracting such information and describe a tool we have developed to validate our approach.
IAM
Download
Vincenzo Salipante
Abstract
Download
In this paper we prove completeness for classical first-order logic of partial terms by making use of deduction chains (see Schütte [5]). This technique has some advantages, as it involves no embedding; hence, in the course of the proof, the role played by partiality is clearly brought out.
Matthias Zimmermann and Horst Bunke
Abstract
Download
This paper presents an automatic segmentation scheme for cursive handwritten text lines using the transcriptions of the text lines and a hidden Markov model (HMM) based recognition system. The segmentation scheme has been developed and tested on the IAM database that contains off-line images of cursively handwritten English text. The original version of this database contains ground truth for complete lines of text only, but not for individual words. With the method described in this paper, the usability of the database is greatly improved because accurate bounding box information and ground truth for individual words (including punctuation characters) are now available as well. Applying the segmentation scheme to 417 pages of handwritten text, a correct word segmentation rate of 98% has been achieved, producing correct bounding boxes for over 25'000 handwritten words.
Matthias Zimmermann and Horst Bunke
Abstract
Download
This report investigates the use of three different schemes to optimize the number of states of linear left-to-right Hidden Markov Models (HMM). As the first method we describe the fixed length modeling scheme where each character model is assigned the same number of states. The second method considered is the Bakis length modeling where the number of model states is set to a given fraction of the average number of observations of the corresponding character. In the third length modeling scheme the number of model states is set to a specified quantile of the corresponding character length histogram. This method is called quantile length modeling. A comparison of the different length modeling schemes has been carried out with a handwriting recognition system using off-line images of cursively handwritten English words from the IAM database. For the fixed length modeling a recognition rate of 61% has been achieved. Using Bakis or quantile length modeling the word recognition rates could be improved to over 69%.
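The three length-modeling schemes can be sketched in a few lines. The function below is a hypothetical illustration; the parameter names and default values are assumptions, not taken from the report:

```python
import statistics

def num_states(lengths, scheme="quantile", q=0.5, fraction=0.5, fixed=8):
    """Choose the number of HMM states for one character model from the
    observed observation lengths of that character (illustrative sketch)."""
    if scheme == "fixed":
        # fixed length modeling: every character gets the same state count
        return fixed
    if scheme == "bakis":
        # Bakis length modeling: a fraction of the mean observation count
        return max(1, round(fraction * statistics.mean(lengths)))
    if scheme == "quantile":
        # quantile length modeling: a quantile of the length histogram
        ranked = sorted(lengths)
        return max(1, ranked[min(len(ranked) - 1, int(q * len(ranked)))])
    raise ValueError(f"unknown scheme: {scheme}")

# e.g. observed frame counts for one character across training samples
widths = [6, 7, 7, 8, 9, 12]
```

With these sample widths, the fixed scheme returns 8 states regardless of the data, while the Bakis and quantile schemes adapt the state count to how long the character typically is.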
Günther Stattenberger and Torsten Braun
Abstract
Download
Despite the increasing bandwidth available to users, there is great demand for Quality of Service support in the Internet. Especially new applications like Voice over IP, Video on Demand and Digital Video Broadcast need reliable bandwidth guarantees and certain delay / jitter values. The Differentiated Services approach is able to provide both end-to-end bandwidth and delay guarantees to users, offering different services suitable for various applications. Our implementation of Differentiated Services on Linux routers will be the basis for several research activities to investigate the behaviour of this promising approach to supporting Quality of Service.
Li Ru and Torsten Braun and Günther Stattenberger
Abstract
Download
Current Differentiated Services (DiffServ) are not yet fully adapted to and integrated with mobile environments, especially when Mobile IP is used as the mobility management protocol in the Internet. When a mobile node (MN) visits a foreign network and requests services based on Service Level Agreements (SLAs), the DiffServ Internet Service Providers (ISPs) must consider how to charge the mobile user. In addition, SLAs have to be renegotiated between home / foreign links and their ISPs. Additional authentication, authorization and accounting (AAA) procedures must be provided for this case. This paper proposes a concept to combine the Service Location Protocol (SLP) and the Mobile IP AAA based architecture and presents a new extended AAA architecture which is independent of the availability of a foreign agent (FA) and works for IPv4 and IPv6 in a uniform manner. In this model, the MN becomes truly able to roam throughout the Internet, while needing substantially less administrative overhead. It only needs a password and a NAI to formulate its global passport. We provide a new feasible method of transferring the MN's SLA stored in the home domain to the foreign domain in a more secure and scalable manner. Meanwhile, several new IP options are also proposed in order to allow the MN to flexibly renegotiate the service level. In particular, the architecture supports mobile telephony over packet-based IP networks without the need for GSM-like mobility management and accounting schemes. To conclude, a detailed specification of a Quality-of-Service (QoS) signaling protocol for Mobile IP nodes is included. Once available, the AAA protocol and infrastructure will provide the economic incentive for a wide-ranging deployment of DiffServ in mobile environments.
Torsten Braun
Abstract
Download
This report describes a concept to support scalable multicast communications for small conferencing groups in the Internet. The concept, called Multicast for Small Conferences (MSC), has in particular been designed for supporting telephone/audio conferences over the Internet. An important goal is smooth deployment in the Internet. The concept has been significantly influenced by the Small Group Multicast (SGM) concept and IP Version 6 (IPv6).
M. Günter and M. Brogle and T. Braun
Abstract
Download
SplitPath is a new application of the easy, well-known and provably secure one-time pad encryption scheme. Two problems hinder the one-time pad scheme from being applied in the area of secure data communication: the generation and the distribution of the required random data. SplitPath exploits the flexibility of code mobility in active networks to address these problems. Especially the random generation is studied in more detail.
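For reference, the one-time pad cipher itself is only a few lines; the plain-Python sketch below (not taken from SplitPath) makes clear why key generation and distribution, rather than the cipher, are the hard part: the key must be truly random, as long as the message, and never reused.

```python
import secrets

def keygen(n: int) -> bytes:
    # The entire security of the scheme rests on this key being
    # unpredictable, message-length, and used only once.
    return secrets.token_bytes(n)

def otp(data: bytes, key: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    assert len(key) >= len(data)
    return bytes(a ^ b for a, b in zip(data, key))

msg = b"attack at dawn"
key = keygen(len(msg))
ciphertext = otp(msg, key)
recovered = otp(ciphertext, key)  # XORing twice with the same key
```

Reusing `key` for a second message would let an eavesdropper XOR the two ciphertexts and cancel the key, which is exactly why distribution of fresh random data is the bottleneck the paper addresses.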
Ibrahim Khalil and T. Braun
Abstract
As Virtual Leased Line (VLL) type point-to-point connections over the Internet are now possible with the recently proposed Expedited Forwarding (EF) Per Hop Behaviour (PHB), Internet Service Providers are looking for ways to sell bandwidth services to prospective corporate customers with the Bandwidth Broker. Based on this approach, most recent implementations consider providing such services in only a single provider domain, while in fact many of the ultimate customers for such services might actually want to extend VLLs up to the periphery of other providers. In this paper we describe the implementation of a Bandwidth Broker that uses a simple signalling mechanism to communicate with other cooperative brokers to enable customers to dynamically create VLLs over multiple DiffServ domains.
Ibrahim Khalil and T. Braun
Abstract
As today's network infrastructure continues to grow and Differentiated Services IP backbones are now available to provide various levels of quality of service (QoS) to VPN traffic, the ability to manage increasing network complexity is considered a crucial factor for QoS-enabled VPN solutions. There is a growing trend among corporate customers to outsource such complicated management services to Internet Service Providers (ISPs), not only to avoid the complexities of VPN establishment and management, but also for economic reasons. In this paper, we present methods to provide end-to-end capacity allocation to VPN connections in a single ISP domain and show the implementation of a Bandwidth Broker managing the outsourced VPNs for corporate customers that have Service Level Agreements (SLAs) with their ISPs. We also present practical configuration examples of commercial routers for enabling QoS-enabled VPN tunnels and show how the Bandwidth Broker can dynamically establish tunnels when users send connection requests from the WWW interface.
Sergei Tupailo
Abstract
Download
We define a realizability interpretation of Aczel's Constructive Set Theory \ensuremath{\mathbf{CZF}} into Explicit Mathematics. The final results are that \ensuremath{\mathbf{CZF}} extended by Mahlo principles is realizable in corresponding extensions of \ensuremath{\mathbf{T_0}}, thus providing relative lower bounds for the proof-theoretic strength of the latter.
Sergei Tupailo
Abstract
Download
We define a novel interpretation $\mathcal{R}$ of second order arithmetic into Explicit Mathematics. Unlike the standard $\mathcal{D}$-interpretation, which was used before and was shown to interpret only subsystems proof-theoretically weaker than \ensuremath{\mathbf{T_0}}, our interpretation can reach the full strength of \ensuremath{\mathbf{T_0}}. The $\mathcal{R}$-interpretation is an adaptation of Kleene's recursive realizability, and is applicable only to intuitionistic theories.
Patrik Schnellmann
Abstract
Download
This paper describes a student project dealing with a computer visualization meant to supplement a "classic" art exhibition. The exhibition centres on the remaining parts of two very large pictures by Ferdinand Hodler. A PC application has been developed which generates a reconstruction of the original pictures based on old black-and-white photographs. Additionally, Hodler's pencil drafts of these pictures can be interactively viewed and compared to the final versions. A simple, self-explanatory interface makes the application accessible to untrained visitors as well.
Stefan Fischer and Michael Binkert and Horst Bunke
Abstract
Download
A feature-based retrieval scheme for microscopic images of diatoms in an image database is presented in this report. Diatoms are unicellular algae found in water and other places wherever there is humidity and enough light for photosynthesis. Several methods for feature extraction are described and experimental results on real diatom images are presented. The proposed feature-based retrieval scheme is based on symmetry measures, geometric properties, moment invariants, Fourier descriptors and simple textural features. Based on these features, the image database is divided into classes using a decision-tree-based classification approach. We have evaluated the discriminant power of the features and show experimental results on a diatom image database.
We show how the characterization of the polytime functions by Bellantoni and Cook can be extended to characterize any stage of the Grzegorczyk hierarchy above the second, thus proposing an answer to a problem posed by Clote. This is done by allowing an arbitrary fixed number of distinct positions for variables instead of only two as in the original work of Bellantoni and Cook.
Thomas Studer
Abstract
Download
Until now, no interpretation was available for $\lambda$-calculi featuring overloading and late-binding, although these are two of the main principles of any object-oriented programming language. In this paper we provide a new semantics for a stratified version of Castagna's $\lambda^{\{\}}$, a $\lambda$-calculus combining overloading with late-binding. The model construction is carried out in $\mathsf{EETJ + (Tot) + (F-I_N)}$, a system of explicit mathematics. We prove the soundness of our model with respect to subtyping, type-checking and reductions. Furthermore, we show that our semantics yields a solution to the problem of loss of information in the context of type-dependent computations.
R. Balmer, F. Baumgartner, T. Braun, M. Günter, I. Khalil
Abstract
Download
This document describes the implementation of a quality-of-service (QoS) enabled, Internet-based virtual private network (VPN) management system. For QoS mechanisms, the system focuses on scalable Differentiated Services, but the document also describes an implementation that allows the mapping of fine-grained Integrated Services QoS mechanisms to Differentiated Services. The security mechanisms of the VPN management system are based on the Internet Protocol security architecture (IPSec).
Manuel Günter
Abstract
Download
The Differentiated Services (DiffServ) architecture for the Internet implements a scalable mechanism for quality-of-service (QoS) provisioning. Bandwidth brokers are the instances of the architecture that automate the provisioning of a DiffServ service between network domains. Although several bandwidth broker implementations have been proposed, the alternatives and trade-offs of the different viable approaches to inter-broker communication have not been studied until now. This paper presents the broker signaling trade-offs considered in the context of a DiffServ scenario used by the Swiss National Science Foundation project CATI, and it presents results gathered by simulations.
T. Braun, M. Günter, M. Kasumi, I. Khalil
Abstract
Download
This document describes an architecture for building and managing QoS-enabled virtual private networks over the Internet. The basic technologies for secure VPNs and for QoS support are introduced in the first chapter. The second chapter describes our vision of a QoS-enabled VPN service over the Internet. It also discusses in detail the required components of an appropriate architecture and their interactions. Based on this architecture, a demonstrator will be implemented. Chapter 3 presents the simplified implementation scenario and some implementation details in order to achieve secure and QoS-enabled VPNs. The work described in this document is part of a deliverable to the CATI [1] project funded by the Swiss National Science Foundation.
P. Brambilla
Abstract
Download
This paper describes a calculus for the resource control of Petri net components. In order to calculate the new input/output behavior of a component resulting from a composition we have developed a formalism for a very restricted set of components. We first introduce the class of circular components and define their composition in our formalism. Afterwards we give a calculus with simplification rules for these components and show that each component has a unique normal form.
This paper describes a novel approach to visual speech recognition. The intensity of each pixel in an image sequence is considered as a function of time. One-dimensional wavelet and Fourier transforms are applied to this intensity-versus-time function to model the lip movements. We present experimental results on two databases of English digits and letters, respectively.
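As a sketch of this idea (with assumed details, not the paper's actual feature set), the magnitudes of the first few Fourier coefficients of a single pixel's intensity-versus-time function can serve as features describing its temporal behaviour:

```python
import cmath

def dft_magnitudes(signal, n_coeffs=4):
    """Magnitudes of the first DFT coefficients of one pixel's
    intensity-versus-time function (illustrative feature extractor)."""
    N = len(signal)
    feats = []
    for k in range(n_coeffs):
        # k-th discrete Fourier coefficient of the time series
        c = sum(x * cmath.exp(-2j * cmath.pi * k * t / N)
                for t, x in enumerate(signal))
        feats.append(abs(c) / N)
    return feats

# hypothetical intensity of one mouth-region pixel over 8 video frames
pixel = [10, 40, 10, 40, 10, 40, 10, 40]
feats = dft_magnitudes(pixel)  # feats[0] is the mean intensity, 25.0
```

A pixel on a moving lip boundary shows strong oscillation (large high-frequency magnitudes), while a static background pixel contributes only a DC term, which is what makes such features useful for modelling lip movement.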
Stephane Ducasse
Abstract
Download
In a language like Smalltalk, in which objects communicate only via message passing, message passing control is a fundamental tool for the analysis of object behavior (tracing, spying) and for the definition of new semantics (asynchronous messages, proxies, ...). Different techniques exist, from the well-known approach based on the specialisation of the doesNotUnderstand: method and the definition of minimal objects, to the use of the method lookup algorithm performed by the virtual machine. Until now, no comparison between these techniques has been made. In this article we compare the different techniques, taking into account the reflective aspects used, the scope, the limits, and the cost of the control.
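The flavour of such message-passing control can be illustrated outside Smalltalk as well. The toy proxy below is a rough Python analogue of the doesNotUnderstand: technique, not one of the techniques compared in the article: a minimal object that understands almost nothing itself, so that every message reaches a hook where it can be logged before being forwarded.

```python
class Tracer:
    """Minimal message-passing control: a proxy that records every
    message sent to its target before forwarding it (a rough Python
    analogue of Smalltalk's doesNotUnderstand: technique)."""

    def __init__(self, target):
        self._target = target
        self.log = []

    def __getattr__(self, name):
        # Called only for messages Tracer itself does not understand,
        # much like doesNotUnderstand: on a minimal object.
        def forward(*args, **kwargs):
            self.log.append(name)  # spy on the message send
            return getattr(self._target, name)(*args, **kwargs)
        return forward

nums = Tracer([3, 1, 2])
nums.append(5)   # intercepted, logged, then forwarded to the list
nums.sort()      # likewise
```

After the two sends, `nums.log` holds the intercepted selectors in order, while the wrapped list has been updated normally, which is the trace/spy behaviour the article discusses.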
Thien M. HA and Horst Bunke
Abstract
Download
This report describes work on handwritten numeral string recognition. In particular, we address the building of a continuous recognition system based on hidden Markov models and the combination of this system with a previously developed discrete recognition system. We describe experiments with a simple combination scheme, called score summation, that turns out to be capable of improving the recognition rates of both individual systems. Further research directions are also proposed.
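The score summation idea is simple to state concretely. In the sketch below the class labels, scores, and normalization step are illustrative assumptions, not taken from the report:

```python
def score_summation(scores_a, scores_b):
    """Combine two recognizers by summing their per-class scores,
    normalizing first so the two systems are comparable (illustrative)."""
    def normalize(scores):
        total = sum(scores.values()) or 1.0
        return {c: s / total for c, s in scores.items()}
    a, b = normalize(scores_a), normalize(scores_b)
    combined = {c: a.get(c, 0.0) + b.get(c, 0.0) for c in set(a) | set(b)}
    return max(combined, key=combined.get)

# hypothetical scores from a continuous (HMM) and a discrete system
hmm = {"3": 0.5, "8": 0.4, "5": 0.1}
discrete = {"8": 0.6, "3": 0.3, "5": 0.1}
best = score_summation(hmm, discrete)
```

Here the two systems disagree on their top choice, but summation favours the class both rank highly, which is how such a scheme can improve on either individual system.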
We show that a uniform version of the limit axiom does not strengthen the theory UTN of universes, types, and names plus the (non-uniform) limit axiom.
Xiaoyi Jiang, Horst Bunke
Abstract
Download
In this paper we present a novel edge detection algorithm for range images based on a scan line approximation technique. Compared to the known methods in the literature, our algorithm has a number of advantages. It provides edge strength measures that have a straightforward geometric interpretation and supports a classification of edge points into several subtypes. We give a definition of optimal edge detectors and compare our algorithm to this theoretical model. By simulations we show that our algorithm has a near-optimal performance. We have carried out extensive tests using real range images acquired by three range scanners with quite different characteristics. The good results that were achieved demonstrate the robustness of our edge detection algorithm.
A. Heuerding and S. Schwendimann
Abstract
Download
A lot of methods have been proposed (and sometimes implemented) for proof search in the propositional modal logics K, KT, and S4. It is difficult to compare the usefulness of these methods in practice, since mostly only the execution times for a few simple formulas have been published. We try to improve this unsatisfactory situation by presenting a set of benchmark formulas. Note that we do not just list formulas, but give a method that allows one to compare different provers today and in the future. As a starting point we give the results we obtained when we applied this benchmark method to the Logics Workbench (LWB). Moreover, we hope that the discussion of postulates concerning benchmark tests for automated theorem provers helps to obtain improved benchmark methods for other logics, too.
Thien M. Ha and H. Bunke
Abstract
Download
This report presents a new approach to off-line handwritten numeral recognition. From the concept of perturbation due to writing habits and instruments, we propose a recognition method which is able to account for a variety of distortions due to eccentric handwriting. The new approach constitutes a shift from the usual pattern recognition paradigm where normalisation is the first step prior to feature extraction and classification. Normalisation is replaced by a set of perturbation processes modelling writing habits and instruments. As a result, the subsequent operations -- feature extraction and classification -- are independently applied for each perturbation type yielding a set of results that are eventually combined. We tested our method on two worldwide standard databases of isolated numerals, namely, CEDAR and NIST, and obtained $99.09\%$ and $99.54\%$ correct recognition rates at no-rejection level, respectively. The latter result was obtained by testing on more than 170000 numerals.
Due to strictness problems, the syntactical definition of Frege structures is usually conceived as a truth theory for total applicative theories. To investigate Frege structures in a partial framework we can follow two ways: first, simply by ignoring undefinedness in the truth definition; second, by introducing a certain notion of pointer. Both approaches are compatible with the traditional formalizations of Frege structures and preserve the main results, namely abstraction and the proof-theoretic strength.
B.T. Messmer and H. Bunke
Abstract
Download
In this paper we present a fast algorithm for the computation of error-correcting graph isomorphisms. The new algorithm is an extension of a method for exact subgraph isomorphism detection from an input graph to a set of a priori known model graphs, which was previously developed by the authors. Similarly to the original algorithm, the new method is based on the idea of creating a decision tree from the model graphs. This decision tree is compiled off-line in a preprocessing step. At run time, it is used to find all error-correcting graph isomorphisms from an input graph to any of the model graphs up to a certain degree of distortion. The main advantage of the new algorithm is that error-correcting graph isomorphism detection is guaranteed to require time that is only polynomial in terms of the size of the input graph. Furthermore, the time complexity is completely independent of the number of model graphs and the number of edges in each model graph. However, the size of the decision tree is exponential in the size of the model graphs. Nevertheless, practical experiments have indicated that the method can be applied to graphs containing up to 16 vertices.
Thien M. Ha, G. Kaufmann, and H. Bunke
Abstract
Download
This report discusses several general aspects of text localisation and handwriting recognition. In particular, we consider their applications to extraction and recognition of numerals written on Giro forms. For text localisation, we investigate the problem of extracting text printed or written inside boxes on forms. We review a number of representative methods for solving this problem, describe the implementation of one of them, and present some experimental results obtained on real data. For handwriting recognition, we first present the general methodology, including a new approach called perturbation method. Then we explain how the general methodology is applied to three subproblems in handwriting recognition, namely, the recognition of isolated numerals, that of numeral strings, and the recognition of cursively handwritten words drawn from a small lexicon. Experimental results show that our systems are either equivalent to or better than state-of-the-art systems.
We investigate universes axiomatized as sets with natural closure conditions over Frege structures. In the presence of a natural form of induction, we obtain a theory of proof-theoretic strength \Gamma_0.
Building large Optical Character Recognition (OCR) databases is time-consuming and tedious work. Moreover, the process is error-prone due to the difficulty of segmentation and the uncertainty in labelling. When the database is very large, say one million patterns, human errors due to fatigue and inattention become a critical factor. This report discusses one method to alleviate the burden caused by these problems. Specifically, the method allows an automatic detection of abnormalities, e.g. mislabelling, and thus may contribute to cleaning up a labelled database. The method is based on a recently proposed optimum class-selective rejection rule. As a test case, the method is applied to the NIST databases containing nearly 300'000 handwritten numerals.
This report reviews various class-selective rejection rules for pattern recognition. A rejection rule is called class-selective if it does not reject an ambiguous pattern from all classes but only from those classes that are most unlikely to issue the pattern. Both optimal and suboptimal rules, e.g. top-n ranking, are considered. Experimental comparisons performed on the recognition of isolated numerals from the NIST databases show that the optimal class-selective rejection rule is actually better than two other heuristic rules.
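The contrast between the rules compared here can be sketched as follows. The optimal rule in the report involves an optimal error/selection tradeoff; the threshold variant below is only a simplified stand-in, and the parameter values are illustrative assumptions. The key behaviour is that top-n ranking always keeps a fixed number of classes, while a class-selective rule keeps only the classes likely enough to have produced the pattern, so the number of selected classes adapts to its ambiguity:

```python
def select_classes(posteriors, threshold=0.2, top_n=None):
    """Class-selective rejection: instead of rejecting an ambiguous
    pattern from all classes, keep only the plausible ones (sketch)."""
    ranked = sorted(posteriors.items(), key=lambda kv: kv[1], reverse=True)
    if top_n is not None:
        # heuristic top-n ranking: fixed-size candidate set
        return [c for c, _ in ranked[:top_n]]
    # simplified class-selective rule: adaptive candidate set
    return [c for c, p in ranked if p >= threshold]

# hypothetical posteriors for an ambiguous handwritten digit
post = {"1": 0.45, "7": 0.40, "4": 0.10, "9": 0.05}
select_classes(post)           # keeps only the plausible classes 1 and 7
select_classes(post, top_n=3)  # always keeps exactly three classes
```

For a clearly written digit the adaptive rule would shrink the candidate set to a single class, while top-n ranking would still carry n candidates forward.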
This report reviews various optimum decision rules for pattern recognition, namely, Bayes rule, Chow's rule (optimum error-reject tradeoff), and a recently proposed class-selective rejection rule. The latter provides an optimum tradeoff between the error rate and the average number of (selected) classes. A new general relation between the error rate and the average number of classes is presented. The error rate can directly be computed from the class-selective reject function, which in turn can be estimated from unlabelled patterns, by simply counting the rejects. Theoretical as well as practical implications are discussed and some future research directions are proposed.
Dr. J. Grabowski
Abstract
Download
This technical report provides a theoretical basis for the implementation of a TTCN test case simulator. In order to base the implementation on existing code, an automata-based model has been chosen, i.e., in the model a TTCN test case shows automaton-like behavior. The procedure is the following: TTCN test cases are transformed into an internal structure, which afterwards can be interpreted by using the provided enabling conditions and execution rules. The model covers control and data flow aspects of normal and concurrent TTCN.
This report presents the implementation of "Generic Synchronization Policies" (abbreviated as GSP) using the language Pict. The main goal of this work was to see how well suited Pict is for implementing higher-level abstractions. The remainder of this report is structured as follows: Section 2 briefly introduces the GSP concept. Pict and its object model are presented in Section 3. The implementation of GSP is the heart of Section 4. Finally, Section 5 mentions possible future work.
J.-G. Schneider, M. Lumpe
Abstract
Download
For the development of present-day applications, programming languages supporting higher-order abstractions are needed. These higher-order abstractions are called components. Since most of the currently available programming languages and systems fail to provide sufficient support for specifying and implementing components, we are developing a new language suitable for software composition. It is not yet clear what such a language will look like, what kind of abstractions it must support, and what kind of formal model it will be based on. Object-oriented programming languages address some of the needs of present-day applications, and it is therefore natural to integrate some of their concepts and abstractions into the language. As a first step towards such an integration, we have to define an object model. Since no generally accepted formal object model exists, we have chosen the Pi-calculus as a basis for modelling. In order to find a suitable object model, we have built an object modelling workbench for PICT, an implementation of an asynchronous Pi-calculus. In this work, we define a first abstract object model, describe several implementations of the object model in PICT, and discuss interesting features and possible extensions.
E. Dubuis
Abstract
The topic of this paper is visibility computations for global illumination with the radiosity method. The heart of radiosity is the determination of the energy exchange between polygons in a closed environment, which depends on geometric relations between the polygons, expressed by the form factor. The most expensive part of the form factor computation is the visibility term. Recent developments have shown that the cluster hierarchy can be used to accelerate these computations. A cluster of polygons, or patches, is taken as an isotropic scattering volume with a density and therefore an extinction factor which is defined by the total area of the surface elements. The new technique proposed in this paper is the computation of a direction-dependent attenuation for each cluster and the acceleration of the visibility determination with these attenuation values. This new technique requires a preprocessing step, in which a cluster's environment is subdivided into sectors for which the attenuation is computed. The resulting cluster attenuations are used during the actual illumination process to determine the visibility between two points of interest.
Bernard Achermann and Horst Bunke
Abstract
Download
This report is divided into two parts. In the first part we present an overview of sensor fusion. We analyze the individual steps of a fusion process and describe in a systematic way several methods for combining pattern classifiers. The second part describes some practical experiments with classifier combination in the field of face recognition. We investigate the impact of decision combination on the recognition rate. Several combination strategies and their results are compared. The face classifiers used for the practical experiments are briefly presented as well. We used two full-face classifiers (HMMs, eigenfaces) and a profile classifier (shape comparison). Thus, the combination is based on two completely different information sources.
H. Bunke, R. Liviero
Abstract
Download
In this paper an error-correcting parsing algorithm and its application to a postprocessing task in the context of automatic check processing are described. The proposed method has shown very good results in terms of recognition accuracy and execution speed on both real and synthetic data.
Today's graphical user interfaces simulate a perfect world. Windows always have the same look, and buttons have identical shapes and colors and are aligned to regular grids. The only distinction between graphical elements is their label. The current visual design of user interfaces ignores the capabilities of modern displays for effective visual coding, and the resulting uniformity makes it very difficult for the user to recognize graphical elements at first sight. This paper presents some ideas and propositions for the visual design and the 'look & feel' concept of next-generation user interfaces. New concepts for the layout and the visual coding of graphical elements are proposed. The suggested new layout and coding rules should help the user to recognize and remember graphical elements easily and to distinguish them from other elements of the same type.
IAM
J. Grabowski, R. Scheurer, D. Toggweiler, D. Hogrefe
Abstract
Download
The treatment of complexity is one of the main problems in the area of validation and verification of formally specified protocols. The problem can be tackled by using heuristics, partial order simulation methods, and optimization strategies. We present these mechanisms and describe the way they work. Their usefulness is discussed by presenting the results of some experiments.
The concept of rejection is extended to that of class-selective rejection. That is, when an input pattern cannot be reliably assigned to one of the $N$ classes in an $N$-class problem, it is assigned to a subset of classes that are most likely to have issued the pattern, instead of simply being rejected. First, a new optimality criterion is appropriately defined to accommodate the newly introduced decision outcomes. Then, a new decision rule is derived and its optimality proven. Various upper bounds on the error rate are obtained. Finally, some examples are provided to illustrate the differences between the optimum rule and other heuristic rules, such as top-n ranking.
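As a rough illustration of class-selective rejection, the heuristic below (in the spirit of the top-n-style rules the paper compares against, not its proven-optimal rule) assigns a pattern to the smallest set of top-ranked classes whose cumulative posterior reaches a confidence threshold:

```python
def class_selective_reject(posteriors, threshold=0.95):
    """Return the smallest subset of class indices, taken in order of
    decreasing posterior probability, whose cumulative mass reaches
    `threshold`. A singleton result is an ordinary classification; a
    larger subset is a class-selective rejection."""
    ranked = sorted(enumerate(posteriors), key=lambda kv: kv[1], reverse=True)
    subset, mass = [], 0.0
    for cls, p in ranked:
        subset.append(cls)
        mass += p
        if mass >= threshold:
            break
    return subset
```

With a confident posterior the subset collapses to one class; with an ambiguous one it grows, which is exactly the trade-off the paper's optimality criterion formalizes.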
M. Menna
Prof. Dr. D. Hogrefe
Abstract
This report contains the documentation of the validation of the INRES system. INRES has a protocol and a service specification and has been developed for illustrative purposes. The validation has been done with the SDT 3.0 tool based on SDL specifications of the protocol and the service. In particular, the Bit-State exploration proved very useful. It generated mainly reports on implicitly consumed signals, which in some cases point to problems in the specification. The problems were traced by use of the MSC tool, which generates the specific sequence of events leading to the implicit signal consumption. As a result, one error could be found in the specifications, and one more general improvement of the protocol and service design was made.
The focus of this paper lies on the proof theory for extensions of Logic Programming in which it is possible to draw negative conclusions both in a direct and in an indirect way. These extensions are provided with a rule-based deductive system in the sense of the work of Jäger \cite{Jaeger94} for Normal Logic Programs. Rule-based deductive systems can be used as a powerful tool to study the structural properties of logic programming languages. Furthermore, in the deductive systems the fundamental semantical properties of the languages can be formalised by proof rules of the systems. Therefore, different extensions of logic programming can be compared by comparing their deductive systems.
We consider logic programming languages in which it is possible to derive negative information both through a direct and through an indirect derivation mechanism. The isolated direct derivation steps are monotone, whereas the indirect ones are based on the Closed World Assumption and therefore nonmonotone. The semantics of the direct and indirect negative conclusions have separately been studied extensively, as can be seen from the work on Definite Logic Programs with classical negation and the semantics of Normal Logic Programming. However, by combining both direct and indirect inferences of negative conclusions in one system, the semantics can no longer be monotone, because of the nonmonotone derivation mechanism of the indirect derivations. On the other hand, it can no longer be the same nonmonotone derivation mechanism that corresponds to the indirect derivations, since there are additional mechanisms to derive falsity. We compare two approaches, one in which two different interpretations of negative conclusions are considered, and one in which negative conclusions are interpreted uniformly.
W. Senn, K. Wyler, H.P. Clamann, J. Kleinle, M. Larkum, H.-R. Luescher, L. Mueller, J. Streit, K. Vogt, T. Wannier
Abstract
Download
The size principle implies that the motoneurons of a muscle pool are activated in ascending order of their sizes when that pool of motoneurons receives a common, increasing input. We suggest a simple discrete Lagrangian for an isometrically contracting skeletal muscle. Minimizing the time integral of this Lagrangian leads to recruitment of motor units according to increasing size.
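The recruitment behaviour implied by the size principle can be illustrated with a toy model. The size-proportional threshold k * size used below is an illustrative assumption, not the paper's Lagrangian; it merely encodes that larger motoneurons need a larger common drive before they fire:

```python
def recruited_units(sizes, drive, k=1.0):
    """Indices of motor units recruited by a common drive, assuming
    (for illustration) that each unit's recruitment threshold is
    proportional to its size: k * size. Because thresholds grow with
    size, recruitment proceeds in ascending size order."""
    order = sorted(range(len(sizes)), key=lambda i: sizes[i])
    return [i for i in order if k * sizes[i] <= drive]
```

Increasing `drive` monotonically extends the recruited set, never reorders it, which is the qualitative content of the size principle.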
B.T. Messmer and H. Bunke
Abstract
Download
In this paper, we propose a new approach to the problem of subgraph isomorphism detection. The new method is designed for systems which differentiate between graphs that are a priori known, so-called {\em model graphs}, and unknown graphs, so-called {\em input graphs}. The problem to be solved is to find a subgraph isomorphism from an input graph, which is given on-line, to any of the model graphs. The new method is based on an intensive preprocessing step in which the model graphs are used to create a {\em decision tree}. At run time, the input graph is then classified by the decision tree and all model graphs for which there exists a subgraph isomorphism from the input graph are detected. If we neglect the time needed for preprocessing, the computational complexity of the new subgraph isomorphism algorithm is only quadratic in the number of input graph vertices. Furthermore, it is independent of the number of model graphs and the number of edges in any of the graphs. However, the decision tree that is constructed in the preprocessing step may grow exponentially with the number of vertices of the model graphs. Therefore, we present several pruning techniques which aim at reducing the size of the decision tree. A computational complexity analysis of the new method is given. Also, the advantages and disadvantages of the new algorithm are studied in a number of practical experiments with randomly generated graphs. Finally, the application of the algorithm in a graphic symbol recognition system is briefly discussed.
Sergei Art\"emov and Vladimir Krupski
Abstract
Download
We introduce {\em reference structures} -- a basic mathematical model of a data organization capable of storing and utilizing information about its addresses. A propositional labeled modal language is used as a specification and programming language for reference structures; the satisfiability algorithm for the modal language gives a method of building and optimizing reference structures satisfying a given formula. Corresponding labeled modal logics are presented, supplied with cut-free axiomatizations, and completeness and decidability theorems are proved. Initialization of typed variables in some programming languages is presented as an example of reference structure building.
Reinhard Kahle
Abstract
Download
In this paper we study the relationship between forms of weak induction in theories of operations and numbers. To this end, we investigate the structure of the natural numbers. Introducing a concept of $N$-strictness, we give a natural extension of the theory BON which implies the equivalence of operation induction and $N$ induction. In addition, we show that in the presence of the non-constructive $\mu$-operator the above equivalence is provable without this extension.
Sergei Artemov and Vladimir Krupski
Abstract
Download
In this report, reference structures are introduced. These structures provide a basic logical model of a computer memory organization capable of storing and utilizing information about its addresses. The corresponding labeled modal logics are axiomatized and supplied with completeness and decidability theorems. A labeled modal formula can be regarded as a description of a certain reference structure; the satisfiability algorithm gives a method of building and optimizing reference structures satisfying a given formula.
Gerhard Jäger and Thomas Strahm
Abstract
Download
In this paper we study applicative theories of operations and numbers with (and without) the non-constructive minimum operator in the context of a {\it total}\/ application operation. We determine the proof-theoretic strength of such theories by relating them to well-known systems like Peano Arithmetic $\mathop{\mbox{\it PA}}$ and the system $\mathop{(\Pi_\infty^0\mbox{\it -CA})}\nolimits_{<\varepsilon_0}$ of second order arithmetic. Essential use will be made of so-called fixed-point theories with ordinals, certain infinitary term models and Church Rosser properties. (To appear in Annals of Pure and Applied Logic.)
Jean-Guy Schneider, Edgar F.A. Lederer, Peter Schwab
Abstract
Download
The solution of large sparse systems of linear equations is one of the most computationally intensive parts of finite element simulations. In order to solve these systems of linear equations, we have implemented a parallel conjugate gradient solver on the SPMD-programmable MUSIC-system. We outline the conjugate gradient method, give a formal specification in Maple, and describe a data-parallel program. We illustrate how the number of processors influences the speed of convergence due to different data distributions and the non-associativity of the floating point addition. We investigate the speed of convergence of the conjugate gradient method for different floating point precisions (32, 44, 64, and 128 bit) and various finite element models (linear beams, human spine segments). The results show that it is more important to concentrate on appropriate numerical methods depending on the finite element models considered than on the floating point precision used. Finally, we give the results of our speedup measurements.
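For reference, the sequential conjugate gradient iteration that the report parallelizes can be sketched as follows (plain Python with a dense matrix as a list of rows; the MUSIC implementation is data-parallel and distributed, and the report's point is precisely how data distribution and floating-point non-associativity perturb this iteration):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for a symmetric positive-definite matrix A
    (list of rows) and right-hand side b. Returns the approximate
    solution x of A x = b. `tol` bounds the squared residual norm."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]                      # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # Update the search direction to stay A-conjugate.
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

In exact arithmetic CG converges in at most n steps; in finite precision the reductions (the `sum` calls) are exactly where a parallel data distribution changes the rounding behaviour.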
Xiaoyi Jiang and Horst Bunke
Abstract
Download
This paper deals with the detection of symmetries of polyhedra and its applications. It consists of two parts. First, we review seven algorithms for symmetry detection of polyhedra. Since these algorithms supply symmetry information in quite different forms, we classify the three most common output forms of the symmetry detection algorithms and discuss their relationships and the transformation from one form into another. For each algorithm, the following five aspects are considered: the output form, the computational complexity, the polyhedra class the algorithm can handle, the implementation, and the suitability for solving the related polyhedral congruity problem. Then, we compare the seven symmetry detection algorithms and give some recommendations as to which algorithm to choose for particular applications. In the second part of this paper we discuss some applications of symmetry information in robotics, geometric problem-solving, and computer vision. The conclusions of this review are twofold. On the one hand, symmetries can be very useful to resolve ambiguities or increase computational efficiency in problem-solving processes involving geometric objects. On the other hand, simple and efficient symmetry detection algorithms are available now. This makes the symmetry exploration a practical issue for many applications.
Thien M. HA and Horst BUNKE
Abstract
Download
This report discusses several general aspects of text localisation and handwriting recognition. In particular, we consider their applications to extraction and recognition of numerals written on Giro forms. For text localisation, we investigate the problem of extracting text printed or written inside boxes on forms. We review a number of representative methods for solving this problem, describe the implementation of one of them, and present some experimental results obtained on real data. For handwriting recognition, we first present the standard general methodology, then apply it to handwritten numeral recognition, and finally propose a new approach, called perturbation-based, that allowed us to boost the recognition rate up to $99.1\%$ on a worldwide standard database.
E. Dubuis, Prof. H. Bieri
Abstract
Download
This paper presents a fast radiosity algorithm for illuminating scenes containing large piecewise polygonal surfaces. Dynamic Subdivision is based on the well-known Adaptive Subdivision introduced by M. Cohen et al. During the illumination process, patches in areas with high intensity gradients are refined. Contrary to Adaptive Subdivision, the presented algorithm does not subdivide patches in a static way. The patch hierarchy is dynamic and adapted to the respective state of the illumination process. To decide on patch subdivision, more information about the gathered energy of a patch is considered than with Adaptive Subdivision. The results show that this new algorithm can lead to remarkable speedups compared to Adaptive Subdivision.
Prof. H. Bieri, P-M. Schmidt
H. Bunke, M. Roth, E.G. Schukat-Talamazzini
Abstract
Download
A method for the off-line recognition of cursive handwriting based on Hidden Markov Models (HMMs) is described. The features used in the HMMs are based on the arcs of skeleton graphs of the words to be recognized. An average correct recognition rate of over 98% on the word level has been achieved in experiments with cooperative writers using two dictionaries of 150 words each. CR Categories and Subject Descriptors: I.4: Image Processing; I.5: Pattern Recognition. General Terms: Algorithms.
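HMM-based recognizers of this kind decode observation sequences with the standard Viterbi algorithm. The sketch below is the generic textbook version for a discrete HMM (not the report's skeleton-arc feature model); the probability tables are illustrative:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for an observation sequence `obs`
    under a discrete HMM given by start, transition, and emission
    probability dictionaries."""
    # Initialization: probability of starting in each state.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor state for s at time t.
            prob, prev = max(
                (V[t - 1][r] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```

In a word recognizer, one such decoding (or a forward-probability score) is computed per dictionary word model and the highest-scoring word wins.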
Roland Robmann
Abstract
Download
Range images generated by the coded-light technique practically always contain shadow regions. When edges are extracted from corresponding grey-value and range images and integrated, shadow boundaries can be classified as shadow edges, and edges detected inside shadow regions as intensity edges. This report presents a method that interprets shadow edges as geometric or non-geometric and classifies them where applicable, and that classifies and localizes intensity edges. To this end, an attributed image graph of the scene is constructed first and then localized in space (space graph). Interpretation then proceeds, on the one hand, by verifying projection lines and, on the other hand, by applying interpretation rules to vertex configurations in line drawings. After post-processing, the space graph is represented by the extended surface model. CR Categories and Subject Descriptors: I.4.6 [Image Processing]: Segmentation; I.4.8 [Image Processing]: Scene Analysis. General Terms: Algorithms. Additional Key Words: Range data, sensor fusion, edge and feature detection, interpretation, line drawings.
Rolf Oppliger
Abstract
Download
A whole range of authentication and key distribution systems is available today for use in computer and communication networks. This report presents, discusses, and compares Kerberos, NetSP, SPX, TESS, and SESAME. Keywords: Network Security, Authentication, Key Distribution
Jens Grabowski
Abstract
Download
This technical report summarizes the results of the research and development project 'Conformance Testing -- A Tool for the Generation of Test Cases'. Within this project we developed a method for the automatic generation of test cases based on formal specifications and formally defined test purposes. The method is called SAMSTAG and is implemented in the SAMSTAG tool. The report starts with a short introduction (Section 1). Then the standardized conformance testing procedure, abbreviated in the following as CTMF/FMCT, is compared with other test case generation methods (Section 2). Afterwards, the SAMSTAG method (Sections 4 and 5) and the SAMSTAG tool (Section 6) are introduced. In the last section, the formal aspects of CTMF/FMCT, of other test case generation methods, and of the SAMSTAG method are summarized (Section 7). This summary provides a basis for a complete formal explanation of the entire conformance testing procedure.
Jens Grabowski
Abstract
Download
In 1992 and 1993 the University of Berne cooperated with Siemens-Albis AG Zürich to develop a method that allows complete TTCN test cases to be generated from MSC descriptions. The goal is reached by extending the MSC language with a few new language constructs, relating MSCs to data descriptions, and developing the algorithms for the TTCN generation. The method is implemented by a set of prototype tools. The paper starts with a short introduction (Section 1). Then the current procedure of conformance testing is examined (Section 2). The extensions of the standardized MSC language are described and the specification of test cases with MSCs is shown (Section 3). The algorithm for the generation of TTCN behavior descriptions is sketched (Section 4), and MSCs are related to data descriptions (Section 5). The whole method is summarized and a set of prototype tools which implements the method is presented (Section 6). Finally, a short outlook is given (Section 7).
S. Leue and Ph. Oechslin
Abstract
Download
We propose a formalized method that allows an optimized implementation to be derived automatically from the formal specification of a protocol. Our method starts with the SDL specification of a protocol stack. We first derive a data and control flow dependence graph from each SDL process. Then, in order to perform cross-layer optimizations, we combine the dependence graphs of different SDL processes. Next, we determine the common path through the multi-layer dependence graph. We then parallelize this graph wherever possible, which yields a relaxed dependence graph. Based on this relaxed dependence graph, we interpret different optimization concepts that have been suggested in the literature, in particular lazy messages and the combination of data manipulation operations. Together with these interpretations, the relaxed dependence graph can be used as a foundation for a compile-time schedule on a sequential or parallel machine architecture. The formalization we provide allows our method to be embedded in a more comprehensive protocol engineering methodology. CR Categories and Subject Descriptors: C.1.2 [Processor Architectures]: Multiple Data Stream Architectures (Multiprocessors); C.2.2 [Computer-Communication Networks]: Network Protocols; D.2.1 [Software Engineering]: Requirements/Specification; D.2.2 [Software Engineering]: Tools and Techniques; D.3.4 [Programming Languages]: Processors. Additional Keywords: Specification and Description Language (SDL); Data- and Control Flow Analysis; High Speed Protocols; Parallel Protocol Implementation; Protocol Optimization.
Daniel Toggweiler
Abstract
Download
Conformance tests are the basis for the harmonious interworking of communication products. By means of test cases, implementations are tested for conformance against protocol specifications in their important aspects. The generation of test cases is a major problem. Within the research project 'Conformance Testing -- A Tool for the Generation of Test Cases', the tool SaMsTaG for the automatic generation of test cases was developed at the University of Berne. SaMsTaG is the abbreviation for 'SDL and MSC based Test Case Generator'. SaMsTaG is based on the simulation of an SDL specification and an MSC; it generates system runs that fulfil certain conditions. This report describes the construction of a simulation tool for SDL specifications, which was developed within the above project. CR Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General; C.2.2 [Computer-Communication Networks]: Network Protocols; D.2.5 [Software Engineering]: Testing and Debugging
Daniel Toggweiler and Robert Nahm
Abstract
Download
Conformance testing is the basis for the harmonious interworking of communication products. By means of test suites, implementations are tested for conformance against protocol specifications in their most important parts. The generation of test suites is a major problem. Within the research project Conformance Testing -- A Tool for the Generation of Test Cases (contract number 233/257, funded by Swiss PTT) at the University of Berne, the test case generator SaMsTaG has been developed. SaMsTaG is the abbreviation for 'Sdl And Msc baSed Test cAse Generator'. SaMsTaG generates a single test case for a protocol specification given by an SDL description and a test purpose given by an MSC. The scope of this paper is to demonstrate the application of SaMsTaG. As an example, the Inres protocol is chosen. We generate a test suite for the Initiator of the Inres protocol by means of SaMsTaG. CR Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General; C.2.2 [Computer-Communication Networks]: Network Protocols; D.2.5 [Software Engineering]: Testing and Debugging
In this paper a new algorithm for string edit distance computation is proposed. It is based on the classical approach [11]. However, while in [11] the two strings to be compared may be given online, our algorithm assumes that one of the two strings to be compared is a dictionary entry that is known a priori. This dictionary word is converted, in an off-line phase to be carried out beforehand, into a special type of deterministic finite state automaton. Now, given an input string corresponding to a word with possible OCR errors and the automaton derived from the dictionary word, the computation of the edit distance between the two strings corresponds to a traversal of the states of the automaton. This procedure needs time which is only linear in the length of the OCR word. It is independent of the length of the dictionary word. Given not only one but $N$ different dictionary words, their corresponding automata can be combined into a single deterministic finite state automaton. Thus the computation of the edit distance between the input word and each dictionary entry, and the determination of the nearest neighbor in the dictionary need time that is only linear in the length of the OCR word. Particularly, the time complexity is independent of the size of the dictionary. Categories and Subject Descriptors: I.7.0 [Text Processing]: General; I.7.1 [Text Processing]: Text Editing; H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing - dictionaries General Terms: algorithms, computational complexity Additional Key Words and Phrases: string edit distance, finite state automaton, nearest neighbor search, dictionary lookup.
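The classical approach [11] that serves as the baseline here is the well-known dynamic-programming edit distance, quadratic in the lengths of the two strings; the automaton described above precomputes the dictionary-word dimension of this table away. A minimal sketch of that baseline:

```python
def edit_distance(a, b):
    """Classical dynamic-programming string edit distance with unit
    costs for insertion, deletion, and substitution (the online
    algorithm that the automaton-based method accelerates)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i              # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j              # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n]
```

In the paper's setting, the columns corresponding to a fixed dictionary word become states of a deterministic automaton built offline, so that scoring an OCR word needs only one pass over its characters.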
Jens Grabowski and Robert Nahm and Andreas Spichiger and Dieter Hogrefe
Abstract
Download
For the conformance testing of OSI protocols, the automatic generation of test cases from formal specifications is a problem that has so far been solved only insufficiently. Scientific institutions have developed a whole range of methods for test case generation, but these are hardly usable in practice. On the one hand, real systems are often so complex that they can no longer be tested with these methods. On the other hand, the development of these methods has paid little attention to the procedure commonly followed in practice for OSI conformance testing. This procedure is described in particular in the international ISO/IEC standard 9646 (ISO/IEC IS 9646) \cite{IS9646:92}. The SAMSTAG method developed by us also allows test cases for conformance tests to be generated. In contrast to other methods, the SAMSTAG method closely follows ISO/IEC IS 9646. In this article, the problems mentioned above are discussed in detail and the SAMSTAG method is introduced. In addition, we show how ISO/IEC IS 9646, the methods developed at scientific institutions, and the SAMSTAG method can complement each other in a complete formal explanation of OSI conformance testing. CR Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General; C.2.2 [Computer-Communication Networks]: Network Protocols; D.2.5 [Software Engineering]: Testing and Debugging
Bruno T. Messmer and H. Bunke
Abstract
Download
In this paper a new approach to exact and inexact graph matching is introduced. We propose a compact network representation for graphs which is capable of sharing identical subgraphs of one or several model graphs. The new matching algorithm NA works on the network and uses its compactness in order to speed up the detection process. Next, the problem of inexact graph matching is described and a distance measure based on basic graph edit operations and subgraph isomorphism is defined. We propose an inexact network algorithm INA which determines the optimal distance between an input graph and a set of model graphs, along with the corresponding subgraph isomorphism. In addition to INA, a new lookahead procedure is proposed. The lookahead procedure works simultaneously over a set of model graphs and efficiently limits the size of the search space. The advantages of the new methods are studied in a theoretical complexity analysis. Finally, some experimental results with randomly generated graphs and a traditional graph matching algorithm for comparison confirm the superiority of the new methods in practice. CR Categories and Subject Descriptors: F.2 Analysis of Algorithms and Problem Complexity; I.2.4 [Artificial Intelligence]: Knowledge representation formalisms and methods; I.2.8 [Artificial Intelligence]: Problem solving, Control Methods and Search; I.5 [Artificial Intelligence]: Pattern Recognition. General Terms: Algorithms. Additional Keywords: Graphs, subgraph isomorphism, RETE
Tyko Strassen
Abstract
Download
This report describes a modal model theory for the Basic Logic of Proofs. For each of the systems P, PF, PU, PM, PFM and PUM, appropriate modal models are defined; soundness and completeness are proved using the technique of canonical models. Moreover, some basic principles of the Basic Logic of Proofs, primarily concerning fixed points, are investigated. Categories and Subject Descriptors: F.4.1 [Mathematical Logic]: Proof Theory; I.2.3 [Deduction and Theorem Proving]: Metatheory; I.2.4 [Knowledge Representation Formalisms and Methods]: Representation languages. General Terms: Canonical Models, Proof, Modality
Hansruedi Scheurer and Andreas Spichiger and Jens Grabowski
Abstract
Download
ETSI (European Telecommunications Standards Institute) works on European standards in the field of telecommunications. It is divided into several Technical Committees, each of which concentrates on a specific subject area. TC MTS (Methods for Testing and Specification) works in the field of conformance testing and the formal specification of communication protocols. The Sub Technical Committees MTS1 and MTS2 handle various Work Items in their meetings and partly in project teams. The first part of this report introduces the organization, structure, and field of work of ETSI. The second part briefly summarizes the current state of the work on the various Work Items. CR Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General; C.2.2 [Computer-Communication Networks]: Network Protocols; D.2.1 [Software Engineering]: Requirements / Specifications. General Terms: Standardization
Andreas Ueltschi
Abstract
In this report, a model-based system for object recognition and localization is presented.\footnote{This work is part of the project ''An intelligent multisensory robot vision system: planning of vision tasks and object recognition based on CAD--models'', funded by the Swiss National Science Foundation within NFP 23 ''Artificial Intelligence and Robotics''.} The system processes range images, which contain explicit three-dimensional information. This above all simplifies localization and in general yields more accurate results. The goal of this work is to achieve robust recognition of objects, even in scenes containing several objects. The system described in this report performs feature extraction and the actual recognition. For segmenting the individual scenes into regions, algorithms are applied that have been implemented earlier at our institute. The main emphasis of this work lies on the special clustering and on the exploitation of the topological properties of geometric objects. Scenes and CAD models are represented by an attributed graph structure, the so-called surface adjacency graph. Recognition itself is based on a subgraph isomorphism detection, implemented as a hypothesis-verification system. Thanks to the exploitation of the special properties of geometric objects, an isomorphism search with linear effort is achieved. In this work, polyhedral scenes with various workpieces are used. These scenes are composed of several individual objects (up to 6 objects per scene), which results in a high degree of occlusion and cast shadows. Several example scenes are discussed in detail at the end of the work.
CR Categories and Subject Descriptors: I.4.6 [Image Processing]: Segmentation; I.4.8 [Image Processing]: Scene Analysis General Terms: Feature extraction, object recognition, object localization Additional Key Words: Range data, clustering, subgraph isomorphism
X. Y. Jiang and H. Bunke
Abstract
Download
In this paper we present a multisensory vision system that is intended to support the vision requirements of an intelligent robot system. Contrary to many other vision systems, our system has two significant new features. First, it contains multiple sensors, object representations, and image analysis and interpretation methods in order to solve a number of different vision tasks. Secondly, it comprises a vision planner. Upon a task-level vision request from the robot system, the vision planner transforms it into appropriate sequences of concrete vision operations, executes these operations, and, if necessary, finds alternative strategies. Experimental results demonstrate the clear advantage of this combination of multiple resources with the vision planner in solving typical vision problems for robotic tasks.
Roland Robmann
Abstract
Download
This report describes an edge-oriented method for integrating grey-level and range images of complex polyhedral scenes in order to improve feature extraction. A range sensor operating on the coded-light principle generates, for each scene, a grey-level image and a range image whose pixels correspond to each other. Edges are detected, decomposed and classified independently in each of the two images. In a two-stage integration procedure, the edges from the grey-level image are complemented by the edges from the range image (complementary fusion). The robustness of this method was tested on a collection of about twenty scenes. CR Categories and Subject Descriptors: I.4.6 [Image Processing]: Segmentation; I.4.8 [Image Processing]: Scene Analysis. General Terms: Algorithms. Additional Key Words: Range data, sensor fusion, edge and feature detection.
Jens Grabowski and Dieter Hogrefe and Robert Nahm and Andreas Spichiger
Abstract
Download
The problems of current theoretical foundations of testing are their restriction to Finite State Machines (FSMs) and their inability to be related to real black-box testing. In this paper we give a theoretical foundation of practical testing. This foundation also implies a test methodology. A test generation tool based on this methodology is presented at the end. CR Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General; C.2.2 [Computer-Communication Networks]: Network Protocols; D.2.5 [Software Engineering]: Testing and Debugging General Terms: Verification, Theory, Standardization Additional Key Words: Test Generation
Robert Nahm and Jens Grabowski and Dieter Hogrefe
Abstract
Download
The goal of testing is to make statements about the relation between the traces of an implementation and a temporal property. This is not possible for all temporal properties. Within this paper safety and guarantee properties are identified to be testable temporal properties and for these testable properties a test case definition is given. This is done by representing a safety property as a labeled transition system and by representing the guarantee property as a finite automaton. The test case definition is applied to practical testing by using SDL descriptions to specify safety properties and by using Message Sequence Charts (MSCs) to specify guarantee properties. CR Categories and Subject Descriptors: C.2.0 [Computer Communication Networks]: General; C.2.2 [Computer Communication Networks]: Network Protocols; D.2.5 [Software Engineering]: Testing and Debugging
Andreas Spichiger
Abstract
Download
This report discusses the possible effects of test specifications developed within ETSI on the development and sale of products. A further section presents the criteria by which a test specification (e.g., in an ETSI Public Enquiry) could be assessed, and the knowledge required to do so. The final section gives a brief overview of the expected development of test specifications. CR Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General; K.4.1 [Computers and Society]: Public Policy Issues; K.5.2 [Legal Aspects of Computing]: Governmental Issues General Terms: Standardization, Legal Aspects Additional Key Words: Test Specification, Regulation
Jacques E. Boillat
Abstract
Download
We compare two load balancing techniques for Cayley graphs based on information and load exchange between neighboring vertices. In the first scheme, called natural diffusion, each vertex gives (or receives) a fixed part of the load difference to (from) its direct neighbors. In the second scheme, called Cayley diffusion, each vertex successively gives (or receives) a part of the load difference to (or from) direct neighbors incident to the edges labeled by the elements of the generator set of the Cayley graph. We prove that the convergence of the Cayley diffusion is faster than that of the natural diffusion, at least for some particular graphs (cube, circuit with an even number of vertices, graphs from the symmetric group). Furthermore, we compute the fastest possible way to distribute load in a circuit using local load balancing strategies. Topics covered: Theory of Parallel and Distributed Computation, Parallel Algorithms, Load Balancing, Cayley Graphs, Complexity. MSC Mathematics Subject Classification: 68R10 [Computer Science]: Discrete Mathematics in relation to Computer Science, Graph Theory. 05C25 [Combinatorics]: Graph Theory, Graphs and Groups. 05C25 [Combinatorics]: Graph Theory, Applications.
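The natural diffusion scheme described above can be sketched on a circuit (cycle) graph. This is a minimal illustration, not code from the report; the diffusion parameter `alpha = 0.25` and the lattice size are arbitrary example choices.

```python
# Sketch of the "natural diffusion" scheme on a circuit (cycle) graph:
# each vertex exchanges a fixed fraction alpha of the load difference
# with every direct neighbour; all updates happen synchronously.
# alpha = 0.25 is an illustrative choice, not a value from the report.

def natural_diffusion_step(load, alpha=0.25):
    n = len(load)
    new = list(load)
    for v in range(n):
        for u in ((v - 1) % n, (v + 1) % n):  # the two cycle neighbours
            new[v] += alpha * (load[u] - load[v])
    return new

def balance(load, steps=200, alpha=0.25):
    for _ in range(steps):
        load = natural_diffusion_step(load, alpha)
    return load

# total load 8 placed on one of 4 vertices converges towards [2, 2, 2, 2]
load = balance([8.0, 0.0, 0.0, 0.0])
```

The scheme is purely local: each vertex only needs the load values of its direct neighbours, which is what makes it attractive for distributed systems.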
Jens Grabowski and Dieter Hogrefe and Robert Nahm
Abstract
Download
Within this paper a method for the generation of test cases for conformance tests is presented. The method is based on a formal specification written in CCITT SDL and on Message Sequence Charts (MSCs). It assumes that the purpose of a test case is given by at least one MSC. Although SDL was chosen as formal description technique and MSCs were chosen to express test purposes, in principle, the presented method should work for any formal specification which can be represented as labeled transition system and for any test purpose which can be described by a finite automaton. CR Categories and Subject Descriptors: C.2.0 [Computer Communication Networks]: General; C.2.2 [Computer Communication Networks]: Network Protocols; D.2.5 [Software Engineering]: Testing and Debugging
Jens Grabowski and Hansruedi Scheurer and Andreas Spichiger and Stefan Suter
Abstract
Download
This case study deals with the automatic generation of test cases from SDL specifications. Based on the Inres example, it explains how the tool TESDL can be used to generate test cases. TESDL generates a TTCN behaviour tree of the observable (external) behaviour of an SDL specification by simulation, during which the observable behaviour of every possible simulation path up to a predefined depth is reported. However, different simulation paths may show the same observable behaviour. A black-box test cannot distinguish between different system executions with the same observable behaviour, and therefore the TTCN behaviour trees generated by TESDL may include redundancies. This case study shows how test cases for the different communication phases of the Inres protocol can be generated using TESDL. Furthermore, the influences of different test configurations on the test case generation and the results of our TESDL experiments are described. Finally, we describe how the redundancies are eliminated from the generated behaviour trees, and compare the original with the redundancy-free TTCN behaviour trees. CR Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General; C.2.2 [Computer-Communication Networks]: Network Protocols; D.2.5 [Software Engineering]: Testing and Debugging
Thomas Strahm
Abstract
Download
Systems based on theories with partial self-application are relevant to the formalization of constructive mathematics and as a logical basis for functional programming languages. In the literature, they are either presented in the form of partial combinatory logic or the partial lambda calculus, and sometimes these two approaches are erroneously considered to be equivalent. In this paper we address some defects in the partial lambda calculus as a constructive framework for partial functions. In particular, the partial lambda calculus is not embeddable into partial combinatory logic and it lacks the standard recursion-theoretic model. The main reason is a concept of substitution, which is not consistent with a strongly intensional point of view. We design a weakening of the partial lambda calculus, which can be embedded into partial combinatory logic. As a consequence, the natural numbers with partial recursive function application are a model of our system. The novel point will be the use of explicit substitutions, which have previously been studied in the literature in connection with the implementation of functional programming languages. CR Categories and Subject Descriptors: F.4.1 [Mathematical Logic]: Lambda calculus and related systems, Computational logic, Proof theory. General Terms: Explicit mathematics, Logic of partial terms, Partial lambda calculus, Partial combinatory logic, Explicit substitutions
Xavier Fabregas and Felix Grimm
Abstract
This report describes a prototype of an expert system for the diagnosis of thyroid diseases, developed within a COST-B2 project. The system comprises several components, the most important of which are explained in more detail: the knowledge base contains the medical knowledge and a two-stage strategy for reaching a diagnosis. A convenient dialogue component allows the user (a physician) to enter symptoms and findings in several windows, which closely follow the previously used paper form. At the beginning of a consultation, the database interface retrieves any existing information about the patient from the patient database; at the end of a consultation, the most important data are stored in the database. Furthermore, the result output component displays the relevant results on screen and generates a written report (case history sheet, diagnosis) for archiving. In addition to the actual prototype, a subsystem for determining thyroid function is presented, in which a method for handling uncertain and vague knowledge (Dempster-Shafer theory) is used for test purposes. The experiences with both systems are reported, and the conclusions drawn from them for further development are presented. CR Categories and Subject Descriptors: I.2.1 [Artificial Intelligence]: Applications and Expert Systems, Medicine and science; I.2.3 [Artificial Intelligence]: Deduction and Theorem Proving, Uncertainty, "fuzzy", and probabilistic reasoning. General Terms: Knowledge Processing Additional Key Words: Dempster-Shafer Theory, Fuzzy Set Theory, Abductive Logic.
Dieter Hogrefe
Abstract
This paper presents a method which shows the use of MSCs for test case design. The method is to be understood as semiformal. The system to be tested is assumed to be specified formally in SDL. In principle, the method should work for any FDT, as long as it can be represented by a labelled transition system (LTS). The method assists in selecting, from the usually very large number of possible test cases, those which the test case designer considers important on an intuitive basis. A number of issues remain unsolved and are left for further study, as indicated. Beyond that, a tool based on this method is under development.
Stepan Weber and Andreas Greulich and Dieter Hogrefe
Abstract
Download
This paper presents a knowledge-based approach to the design of leased line networks. The tool OPTOLL assists in the optimization of networks by successively applying a number of heuristic rules. The optimization so far is driven by the costs of the network, but other optimization criteria can easily be implemented by modifying the knowledge base. The optimization starts with an initial configuration consisting of a primitive solution for the network: point-to-point lines with no routing. Starting from this, the tool applies so-called Expansion and Contraction operations according to heuristics such as the prevention of long lines or the search for backbone lines. The approach is demonstrated by an example where the overall network costs are reduced considerably with respect to the primitive solution. CR Categories and Subject Descriptors: C.2.1 [Computer-Communication Networks]: Network Architecture and Design General Terms: Algorithms, Design Additional Key Words: Corporate Networks, Artificial Intelligence, Expert Systems
Sergei Artemov and Tyko Strassen
Abstract
Download
This report describes the elimination of the injectivity restriction for functional arithmetical interpretations as used in the systems PF and PFM in the Basic Logic of Proofs. An appropriate axiom system PU in a language with operators "x is a proof of y" is defined and proved to be sound and complete with respect to all arithmetical interpretations based on functional proof predicates. Unification plays a major role in the formulation of the new axioms. Categories and Subject Descriptors: F.4.1 [Mathematical Logic]: Proof Theory; I.2.3 [Deduction and Theorem Proving]: Metatheory; I.2.4 [Knowledge Representation Formalisms and Methods]: Representation languages. General Terms: Unification, Proof, Provability, Modality.
Karl Guggisberg and Peter G. Kropf
Abstract
Download
This report presents the specification tool SystemSpecs by means of the description of a traffic simulation and its real-time implementation on a Transputer network. With growing traffic density in cities and the need for a smooth, environmentally friendly flow of traffic, the requirements on traffic light systems are increasing. The implemented traffic simulation tracks important parameters of a traffic installation over time. SystemSpecs, a modern tool for the design of coloured Petri nets (PrT nets), is used for the implementation. Particular emphasis is placed on a modular net design. The occam code generated automatically by SystemSpecs runs in real time on a Transputer network. CR Categories and Subject Descriptors: D.1.7, D.2.2, I.6.8, J.3 General Terms: Specification Tools, Petri Nets, Traffic Simulation, Parallel Computing
X. Y. Jiang and H. Bunke
Abstract
Download
Three-dimensional (3-D) methods in computer vision have been gaining increasing importance. Among the various range acquisition techniques, active triangulation is certainly the most common approach. In practice, the relationship between the geometry of an active triangulation system and the accuracy in obtaining three-dimensional position information is of great importance. In this paper we consider the error in position estimation due to the finite resolution of the imaging device. A point's measured position in the image plane maps to a small area in 3-D space. With the assumption that the point's true position is uniformly distributed inside this area, closed-form expressions for the probability distribution of error and the average error along each coordinate direction (horizontal, vertical, and range) are derived. Following this, the errors in the three coordinate directions are compared to each other. The probability that the error along one coordinate direction dominates that along another coordinate direction is inferred. The results presented in this paper have potential applications to range sensor configuration and error modeling in vision tasks using range data acquired by active triangulation.
In this report we describe a self-organizing map to solve the mapping problem in a multiprocessor system with a distributed memory architecture. Experimental results for 2-dimensional lattice-based processor networks are presented and the generalization of this approach to arbitrary processor topologies is discussed. CR Categories and Subject Descriptors: D.4.1 [Process Management]: Concurrency; G.2.2 [Discrete Mathematics]: Graph algorithms; I.2.6 [Artificial Intelligence]: Connectionism and neural nets. General Terms: Algorithms Additional Key Words: Self-organizing feature map, multiprocessor systems, process mapping, mapping problem
Jacques E. Boillat and Peter G. Kropf and Peter Schwab
Abstract
This article presents a tool for the specification of parallel systems that leads to the implementation of parallel programs in an efficient and elegant way. The most important elements of the Petri nets used and the corresponding concepts of parallel programming are presented. The use of SystemSpecs, which combines textual and graphical methods (for the development and animation of applications), is demonstrated by an application from the medical domain, an anaesthesia simulation. CR Categories and Subject Descriptors: D.1.7, D.2.2, I.6.8, J.3 General Terms: Specification Tools, Petri Nets, Simulation, Parallel Computing, Medical Applications
Jens Grabowski and Ekkart Rudolph
Abstract
Download
Message Sequence Charts (MSCs) are a widespread means for the description and graphical visualisation of selected system runs within distributed systems, especially telecommunication systems. Various kinds of MSCs with similar expressive power are used frequently within industry and standardisation bodies. Therefore, in 1992 the CCITT (Comité Consultatif International Télégraphique et Téléphonique) harmonised their use by means of the new standard language Message Sequence Chart (MSC) [Z120]. This paper presents a motivation for the MSC standardisation. The history of the standardisation process is briefly sketched. The MSC language is introduced. Some language constructs which may need further elaboration are pointed out and possible enhancements are proposed. Finally, an approach towards the definition of a clear MSC semantics is described. Keywords: CCITT, Formal Description Technique (FDT), Distributed System, Message Sequence Chart (MSC), SDL, Trace Language
Thien Ha Minh and Horst Bunke
Abstract
Download
Check forms are used by many people in daily life for money remittance. Surprisingly, the processing of these forms at banks and post offices is only partly automated. In this report, we focus on a particular kind of form, viz., the GIRO checks used in Switzerland. We will describe a fully automatic system which is able to recognise the following items on a GIRO check: the financial institution, the name and address of the receiver and the account number. The complete analysis and understanding of a GIRO check is performed in two phases. In the first phase, the system performs a layout analysis in order to localise regions corresponding to various items on the check. The input gray-level image is first binarised and segmented using the X-Y-tree decomposition algorithm. This gives us the atomic parts of a check, for example, individual characters, special symbols, and formatting lines which are present on a form. Each entity is then interpreted as a part of an item (e.g., receiver's name, account number), according to the knowledge about possible layouts of a form. All atomic entities belonging to the same item are grouped together and yield the location of that item. In the second phase, the localised items are separately analysed by a local binarisation and the binarised sub-images are submitted to an OCR engine to obtain the streams of characters. We have tested the system on a large number of checks and the results are promising in terms of both computation time and recognition accuracy. CR Categories and Subject Descriptors: I.4.6 [Image Processing]: Segmentation; I.4.7 [Image Processing]: Feature Measurement; I.5.1 [Pattern Recognition]: Models. Additional Key Words: GIRO check, Optical Character Recognition (OCR), document image recognition, graph matching.
René M. Rehmann
Abstract
Download
In this report we describe a communication package designed to support regular process topologies, e.g. master-slave models, on an MPP system. The package allows a neighboring process to be addressed with a relative address and includes functions such as broadcast and combine. The communication package has been implemented as a C preprocessor macro package using ANSI-C syntax to preserve efficiency and portability among compilers.
K. M. Decker and R. M. Rehmann
Abstract
Download
Many problems in computational science can be mapped very efficiently to parallel distributed systems such as loosely-coupled workstation clusters or more closely-coupled multicomputer systems. This paper proposes a methodology of parallel distributed programming and discusses its realization by means of a programming environment to overcome the notoriously difficult programming of distributed systems. The environment centers around a knowledge-based system offering commonly used algorithmic skeletons, together with extensive interactive guidance from the early design to the implementation phase and selection methods for skeletons. This programming assistant at the core is supplemented with three interfaces for high-level design languages and a graphical user interface allowing the graphical representation of these languages. An interface to extend the knowledge base is also included. The investigation is part of the SPADE project, which aims at the development of an integrated program and application development environment for problems in computational science on parallel architectures with distributed memory.
Sergei Artemov and Tyko Strassen
Abstract
Download
Propositional Provability Logic was axiomatized by Solovay (1976). This logic describes the behaviour of the arithmetical operator "$y$ is provable". The aim of the current paper is to present the Logic of Proofs, which provides propositional axiomatizations of the predicate "$x$ is a proof of $y$" by means of modal logic, with the intention of meeting some of the needs of computer science.
Horst Bunke
Jacques E. Boillat
Abstract
Download
We compare two load balancing techniques for Cayley graphs based on information and load exchange between neighboring vertices. In the first scheme, called natural diffusion, each vertex gives (or receives) a fixed part of the load difference to (from) its direct neighbors. In the second scheme, called Cayley diffusion, each vertex successively gives (or receives) a part of the load difference to (or from) direct neighbors incident to the edges labeled by the elements of the generator set of the Cayley graph. We prove that the convergence of the Cayley diffusion is faster than the natural diffusion, at least for some particular graphs (cube, circuit with an even number of vertices, graphs from the symmetric group). Furthermore, we show that the number of communications required in the Cayley diffusion is smaller than that of the natural diffusion. Topics covered. Theory of Parallel and Distributed Computation, Parallel Algorithms, Load Balancing, Cayley Graphs, Complexity. MSC Mathematics Subject Classification. 68R10 [Computer Science]: Discrete Mathematics in relation to Computer Science, Graph Theory. 05C25 [Combinatorics]: Graph Theory, Graphs and Groups. 05C25 [Combinatorics]: Graph Theory, Applications.
Dieter Hogrefe and Stefan Leue
Abstract
Download
Real-time constraints play an important role in the specification of communication protocols. We present three practical real-time specification methods, SDL, real-time extended LOTOS and Message Sequence Charts in combination with real-time temporal logic, and exemplify their application to the specification of real-time aspects in communication protocols. We discuss their suitability for different types of real-time constraints and observe that different specification styles are suitable for different application areas.
Horst Bunke and T. Glauser
Abstract
The recognition of polygons in 3-D space is an important task in robot vision. In this paper two particular problems are addressed. First we propose a new set of local shape descriptors for polygons which are invariant under affine transformation, i.e. translation, scaling, rotation and projection from 3-D to any 2-D plane. Furthermore, they are complete in the sense that they allow the reconstruction of any polygon in 3-D space from three consecutive vertices. The second problem discussed in this paper is the recognition of 2-D polygonal objects under affine transformation and the presence of partial occlusion. We introduce a recognition procedure that is based on the matching of edge length ratios, using a simplified version of the standard dynamic programming procedure commonly employed for string matching. The matching procedure preserves the invariance properties of the local shape descriptors and can cope with partial occlusions. Our algorithm is conceptually very simple, easy to implement and has a low computational complexity. It will be shown in a set of experiments that the method is reliable and robust. CR Categories and Subject Descriptors: I.4.7[Image Processing]: Invariants, Projections,Shape; I.5.4[Pattern Recognition]: Computer Vision; General Terms: Shape Representation, Algorithms Additional Key-Words: Dynamic Programming
Peter B. Ladkin and Stefan Leue
Abstract
Download
Message (or Time) Sequence Charts (MSCs) are used in telecommunications system specification. To investigate the meaning of an MSC specification, we found the need to connect MSC specifications with more precise methods such as temporal logic and Büchi automata. Based on an interpretation of a collection of MSCs as a global state automaton, we provide an explicit semantic connection to temporal logic, enabling properties expressed by temporal logic to be added to MSC specifications. Finally, we determine the expressiveness of MSCs by simulating an arbitrary Büchi automaton by an MSC specification.
Peter B. Ladkin and Stefan Leue
Abstract
Download
We give a precise semantics to Message Sequence Charts (MSCs), by interpreting MSC specifications by Büchi automata. The state transition graph is uniquely determined by the specification, but different automata may be defined to accept different sets of traces allowed by the specification. The precise automaton chosen therefore depends on reliability properties of communications assumed by the specification.
Tim Fernando
Abstract
The "absoluteness" of programs, conceived as syntactic objects, is contrasted with the "open-endedness" of processes, understood as semantic interpretations of programs, relative to a language of transition predicates. While programs can be coded numerically, processes call for more elaborate set-theoretic coding; accordingly, notions of computation on sets are investigated for processes, as programs modulo bisimilarity. In particular, bisimilarity for recursive transition systems is shown not to be hyperarithmetic.
Peter G. Kropf
Abstract
Monte Carlo simulation algorithms are frequently used for lattice gauge theory. These problems belong to the Grand Challenge problems because they require enormous computing power; it is thus natural to use parallel computers for such algorithms. First, an algorithm of the Metropolis type for lattice gauge theory problems is sketched briefly. The emphasis of this paper lies on the presentation of a new algorithmic parallelization of this algorithm. By analyzing the calculations and the causal dependencies between them, a parallelization with almost ideal speedup is achieved. These complexity results are derived theoretically and verified on a Transputer network implementation. CR Categories and Subject Descriptors: D.1.3, D.4.8, F.2, G.1.0, I.1.8, J.2 General Terms: Parallel Algorithms, Monte Carlo Simulation, Complexity Analysis
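The Metropolis-type update mentioned above can be illustrated with a toy sketch. A 1-D Ising chain stands in here for the lattice gauge fields, and the inverse temperature and lattice size are arbitrary example choices; only the serial update rule is shown, not the report's parallelization.

```python
# Toy Metropolis sweep on a 1-D Ising chain (illustrative stand-in for
# the lattice gauge updates sketched in the report).
import math
import random

def local_energy(spins, i):
    # Energy contribution of the two bonds touching spin i.
    n = len(spins)
    return -spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])

def metropolis_sweep(spins, beta, rng):
    n = len(spins)
    for i in range(n):
        dE = -2.0 * local_energy(spins, i)  # energy change if spin i flips
        # Metropolis acceptance rule: always accept downhill moves,
        # accept uphill moves with probability exp(-beta * dE).
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins

rng = random.Random(0)
spins = [rng.choice((-1, 1)) for _ in range(64)]
for _ in range(100):
    metropolis_sweep(spins, beta=1.0, rng=rng)
```

The parallelization discussed in the report exploits the fact that updates of sites that do not share a bond are causally independent and can therefore proceed concurrently.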
Peter G. Kropf
Abstract
Download
The past decade has seen the emergence of many multi-processor machines. Research prototypes were already developed in the 1960s and 1970s, but commercially available parallel computers have appeared only recently. This paper gives an overview of parallel systems currently installed in Switzerland. Although the list may not be complete, it shows that many institutions have their own parallel systems. The many projects in progress at these sites are not presented here. The paper is organised as follows: first, the systems installed at universities and the EPF/ETH are listed in alphabetical order, followed by some example installations in industry and at other schools. Note that in many places single or very small Transputer systems have been purchased; these are not listed here, nor are networked workstations used for distributed computing. A short summary then shows installations of classical (vector) supercomputers. To give an idea of installations in other countries, some example installations in the neighbouring countries of Switzerland are listed.
Christoph Streit
Abstract
The topic of this work is the modelling of geometric objects with Lindenmayer systems (L-systems). A modelling environment is presented that supports the handling of complex L-systems. Support for table-oriented L-systems enables modularisation, so that the rules can be developed in a clear and structured way. The images contained in the report demonstrate impressively that realistic three-dimensional objects can be generated with little effort using the chosen approach. With the L-system method it is possible to model not only a fully grown plant, but also its growth, including the growth of blossoms and leaves. Although the main emphasis lies on the generation of plant-like structures, it turns out that numerical applications can also be handled elegantly with L-systems. Objects are modelled according to the following scheme: (i) a symbol string s is generated by an L-system; (ii) s contains commands that manoeuvre a turtle in three-dimensional space; the turtle can draw lines of varying thickness and colour as well as polygons, spheres and bicubic Bézier surfaces; (iii) the generated primitives are shaded by a ray tracer, producing a realistic image. Some example objects together with their L-systems are shown in the text. CR Categories and Subject Descriptors: F.4.2 [Mathematical Logic and Formal Languages]: Grammars and Other Rewriting Systems; I.3.6 [Computer Graphics]: Computational Geometry and Object Modelling; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. General Terms: L-Systems Additional Keywords: realistic image synthesis, turtle geometry Title: A Modelling Environment for Lindenmayer-Systems
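The first step of the modelling scheme, generating a symbol string with an L-system, can be sketched as follows. The Koch-curve rules are a standard textbook example, not taken from the report.

```python
# Minimal L-system rewriter: a symbol string is generated by repeatedly
# applying production rules to every symbol in parallel. Symbols with no
# rule (here '+' and '-', the turtle's turn commands) are copied as-is.

def rewrite(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Koch-curve production: 'F' draws a line, '+'/'-' turn the turtle.
koch = {"F": "F+F-F-F+F"}
print(rewrite("F", koch, 2))
```

In the environment described above, the resulting string would then drive a 3-D turtle whose primitives are finally rendered by a ray tracer.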
Horst Bunke and J. Csirik
Abstract
Download
String matching is a useful concept in pattern recognition that is constantly receiving attention from both theoretical and practical points of view. In this paper we propose a generalized version of the string matching algorithm by Wagner and Fischer. It is based on a parametrization of the edit cost. We assume constant cost for any delete and insert operation, but the cost for replacing a symbol is given as a parameter $r$. For any two given strings $A$ and $B$, our algorithm computes the edit distance of $A$ and $B$ in terms of the parameter $r$. We give the new algorithm and study some of its properties. Its time complexity is $O(n^2\cdot m)$, where $n$ and $m$ are the lengths of the two strings to be compared and $n\leq m$. We also discuss potential applications of the new string distance to pattern recognition. Finally, we present some experimental results. CR Categories and Subject Descriptors: I.5 [Pattern Recognition]; I.7 [Text Processing]
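A minimal sketch of the parametrized edit distance, evaluated here for one fixed value of r; the report's algorithm instead computes the distance as a function of r in a single run.

```python
# Wagner-Fischer dynamic programming with the parametrization described
# above: insertions and deletions cost 1, a substitution costs r.

def edit_distance(a, b, r):
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i          # i deletions
    for j in range(m + 1):
        d[0][j] = j          # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (0 if a[i - 1] == b[j - 1] else r)
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[n][m]

edit_distance("kitten", "sitting", r=1)  # classic Levenshtein: 3
edit_distance("kitten", "sitting", r=3)  # substitutions priced out: 5
```

For r >= 2 a substitution is never cheaper than a delete plus an insert, so the distance degenerates to the longest-common-subsequence distance; varying r between these regimes is what the parametrization exposes.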
Xiaoyi Jiang and Horst Bunke
Abstract
Download
In this paper we present a novel technique for rapidly partitioning surfaces in range images into planar patches. Essential for our segmentation method is the observation that, within a scan line, the points belonging to a planar surface form a straight line segment; conversely, all points on a straight line segment belong to the same planar surface. Based on this observation, we first divide each scan line into straight line segments and subsequently consider only the set of line segments of all scan lines as segmentation primitives. We have developed a simple link-based data structure to represent line segments and their neighborhood relationships efficiently. The principle of our segmentation method is region growing: three neighboring line segments satisfying an optimality criterion are selected as a seed region, and growing is then carried out around the seed region. We use a noise variance estimate to set some thresholds automatically, so that the algorithm can adapt to the noise conditions of different range images. The proposed algorithm has been tested on real range images acquired by two different range sensors. Experimental results show that the proposed algorithm is fast and robust. CR Categories and Subject Descriptors: I.4.6 [Image Processing]: Segmentation; I.4.8 [Image Processing]: Scene Analysis. General Terms: Algorithms. Additional Key Words: Range data, partitioning, region growing, planar surfaces.
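The first stage, dividing a scan line into straight line segments, can be sketched with a generic recursive splitting criterion; the 2D point representation and the tolerance value below are illustrative assumptions, not the paper's exact procedure:

```python
import math

def split_into_segments(points, tol=0.5):
    """Recursively split a scan line (list of (x, z) points) into maximal
    straight segments: if the point farthest from the chord through the
    endpoints deviates by more than tol, split there and recurse."""
    def point_line_dist(p, a, b):
        # Perpendicular distance from p to the line through a and b.
        (ax, az), (bx, bz), (px, pz) = a, b, p
        num = abs((bx - ax) * (az - pz) - (ax - px) * (bz - az))
        den = math.hypot(bx - ax, bz - az)
        return num / den if den else math.hypot(px - ax, pz - az)

    if len(points) <= 2:
        return [points]
    a, b = points[0], points[-1]
    dists = [point_line_dist(p, a, b) for p in points[1:-1]]
    k = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[k - 1] <= tol:
        return [points]           # already straight within tolerance
    # Split at the worst point; it appears in both halves as a shared endpoint.
    return (split_into_segments(points[:k + 1], tol)
            + split_into_segments(points[k:], tol))

# Illustrative profile with a corner at (2, 0): flat floor meeting a ramp.
print(split_into_segments([(0, 0), (1, 0), (2, 0), (3, 1), (4, 2)]))
```

The resulting segments would then serve as the segmentation primitives for the region-growing stage.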
A. Aeschlimann and J. Schmid
Abstract
Download
An algorithm is presented to draw Hasse-diagrams of partially ordered sets (posets). It uses two heuristic principles to generate `good' pictures for a wide range of posets. These two principles are: (i) The total length of all edges of the diagram should be small (with the vertices kept at a minimal distance) and (ii) the vertices are constrained to coincide with the grid points of a given rectangular planar grid. The benefits are quite straightforward since (i) using less ink means less confusion and (ii) the restriction to grid points tends to keep the number of different slopes small. Since the program was conceived as a readily usable tool (with the emphasis on results rather than on perfection), we are well aware of the fact that it will lend itself easily to improvements in many aspects. Mathematics Subject Classification (MSC, Zentralblatt): 06-04 (Ordered Sets: Explicit machine computation and programs), 06A06 (Ordered Sets: Partial order, general) 90C35 (Mathematical Programming: Programming involving graph or networks)
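A Hasse diagram draws only the covering relation of the poset: $x$ is covered by $y$ when $x < y$ and no element lies strictly between them. A minimal sketch of extracting these edges, assuming the order is given as a comparison predicate (the layout heuristics themselves are not reproduced here):

```python
from itertools import product

def covering_edges(elements, leq):
    """Edges of the Hasse diagram: pairs (x, y) with x < y such that
    no z satisfies x < z < y."""
    def lt(a, b):
        return leq(a, b) and a != b
    edges = []
    for x, y in product(elements, repeat=2):
        if lt(x, y) and not any(lt(x, z) and lt(z, y) for z in elements):
            edges.append((x, y))
    return edges

# Illustrative poset: the divisors of 12 ordered by divisibility.
divs = [1, 2, 3, 4, 6, 12]
print(covering_edges(divs, lambda a, b: b % a == 0))
# [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```

Only these covering edges need to be placed on the grid; all other order relations follow by transitivity.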
On October 10-11, 1991, the third annual meeting of the Swiss Group for Artificial Intelligence and Cognitive Science was held in Biel/Bienne (Switzerland). This report contains the full version of all papers presented at this meeting (except two).
Contributions:
Improving a grammar checker with semantic level (Blache, P. & Scaglione, M.)
Conceptual shifts in KB systems for teaching and learning (Bonzon, P.)
Discriminating agents for visual indexing (Bost, J.-M.)
Figure-ground separation: evidence for asynchronous processing in visual perception? (Burgi, P.-Y.)
Can AI help the physician? (Burke, P., Appel, R.D., Funk, M., Hochstrasser, D.F. & Scherrer, J.-R.)
Towards a theory of knowledge (Dessimoz, J.-D.)
A functional framework for the specification of artificial neural networks (deFrancesco, M. & Pellegrini, C.)
Banker's assistant - an intelligent advisory system (Huber, A.)
Detection of salient features for focus of attention (Milanese, R.)
Vamp - a toolbox for building configuration systems (Peter, E., Rubattel, C. & Schild, G.)
Rule-based recognition of 3-D objects in three views (Peter, M. & Rutishauser, M.)
Taking connectionism seriously: the vague promise of subsymbolism (Verschure, P.F.M.J.)
Using chaos to speed up processing in neural networks: Chaos Based Learning (Verschure, P.F.M.J.)
Robert Esser and Bruno Buetler and Jacques E. Boillat and Peter G. Kropf and Andre Murbach
Abstract
SPECS supports parallel programming at a very high level, i.e. at the specification level. Using High-Level Petri Nets as a formal method, the user is able to describe parallelism graphically. This formal basis allows tools to be implemented that verify user-created nets. SPECS supports the animation of these nets for testing and debugging, and a fast simulator enables specifications written with SPECS to be executed in real time on a multiprocessor-based computer. This paper discusses the SPECS tool, the ramifications of using a specification method for programming parallel computers, the design decisions made in implementing the simulator, and the hardware designed and built. As an example of the use of SPECS, a medical application, an Anaesthesia Simulator, is described.
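The firing rule that such a simulator repeatedly executes can be sketched for plain place/transition nets (SPECS itself works with High-Level Petri Nets, which carry structured tokens; the net below is an illustrative toy, not taken from the paper):

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from its input places (pre)
    and produce tokens on its output places (post)."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Illustrative net: a producer placing one item into a buffer.
m0 = {"producer_ready": 1, "buffer": 0, "consumer_ready": 1}
m1 = fire(m0, {"producer_ready": 1}, {"buffer": 1, "producer_ready": 1})
print(m1)  # {'producer_ready': 1, 'buffer': 1, 'consumer_ready': 1}
```

A simulator essentially iterates this step, choosing among the enabled transitions at each marking.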
Urs-Martin Künzi
Abstract
Download
Meyer and Ritchie gave a description of the primitive recursive functions by loop programs. In this paper, a class of logic programs is described which computes the primitive recursive sets on Herbrand universes. Furthermore, an internal description of the primitive recursive functions and sets on Herbrand universes is given.
Jacques E. Boillat
Download
Hanspeter Bieri and Igor Metz
Abstract
Download
Generalized digital images, subsequently called hyperimages, represent a variation of the conventional digital images which implies pixels of different dimensions within the same image. The extent of a hyperimage is the disjoint union of all pixel extents it contains, which are relatively open unit cubes with respect to the euclidean topology of the underlying space. This approach is independent of any specific dimension of image and space, respectively, and allows strict partitioning of images into subimages, not just subdividing. Since the storage required by a $d$-dimensional hyperimage of resolution $n^d$ is $\approx 2^{d}n^{d}$ when using a binary matrix representation, a more space efficient bintree representation is investigated. Algorithms for the Boolean operations, the computation of elementary topological properties and the computation of some important measures of $d$-dimensional hyperimages (volume, surface, Euler characteristic) are presented. Because of the nature of bintrees, the implementation of these algorithms, too, can be performed independently of any specific dimension of image and space.
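The dimension-independent flavour of the bintree algorithms can be illustrated with a volume computation: each level of the tree halves the current region along one coordinate axis, so the volume of the full cells accumulates without the code ever mentioning $d$. The node representation below is an illustrative assumption, not the paper's data structure:

```python
class Node:
    """Bintree node: 'full', 'empty', or an internal 'node' whose two
    children each cover half of the parent region (the splitting axis
    alternates with depth, but the volume computation never needs it)."""
    def __init__(self, kind, left=None, right=None):
        self.kind = kind              # 'full' | 'empty' | 'node'
        self.left, self.right = left, right

def volume(node, region_volume=1.0):
    """Total volume of all full cells in the bintree."""
    if node.kind == 'full':
        return region_volume
    if node.kind == 'empty':
        return 0.0
    half = region_volume / 2.0        # each split halves the region
    return volume(node.left, half) + volume(node.right, half)

# Illustrative tree over the unit hypercube: one half full,
# plus one quarter of the other half.
t = Node('node',
         Node('full'),
         Node('node', Node('empty'), Node('full')))
print(volume(t))  # 0.75
```

The Boolean operations mentioned in the abstract have the same recursive shape: they descend both trees in lockstep and combine leaves cell by cell.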