Publications

The publications produced in the context of plugIT are listed below. Please click on the screenshot of a paper to open and download the full version.

2011/10 IMMM 2011

Axel Tenschert, Pierre Gilet, "Reasoning on High Performance Computing Resources"

Title: Reasoning on High Performance Computing Resources

Abstract: The growing amount of available information and data over the last decade has led to very large data stocks that express specific knowledge. This knowledge can be stored in ontologies. Reasoning strategies are then required to deal with a huge amount of data even when the time frame allotted to the task is restricted. This work addresses the research issue of performing time-restricted reasoning by means of ontologies. The presented approach is suitable for executing a reasoning strategy on high performance computing resources by considering a short time window and offering a solution for quick allocation of the required resources.

2011/10 ER 2011

H. Kondylakis and D. Plexousakis, “Ontology Evolution in Data Integration: Query Rewriting to the Rescue”

Title: Ontology Evolution in Data Integration: Query Rewriting to the Rescue

Abstract: The evolution of ontologies is an undisputed necessity in ontology-based data integration. In such systems, ontologies are used as a global schema in order to formulate queries that are answered by the data integration systems. Yet, few research efforts have focused on addressing the need to reflect ontology evolution onto the underlying data integration systems. In most of these systems, when ontologies change, their relations with the data sources, i.e. the mappings, are recreated manually, a process which is known to be error-prone and time-consuming. In this paper, we provide a solution that allows query answering under evolving ontologies without mapping redefinition. To achieve that, query rewriting techniques are exploited in order to produce equivalent rewritings among ontology versions. Whenever equivalent rewritings cannot be produced, we a) guide query redefinition or b) provide the best “over-approximations”. We show that our approach can greatly reduce the human effort spent, since continuous mapping redefinition on evolving ontologies is no longer necessary.

2011/10 eChallenges 2011

P. Gilet, A. Tenschert, R. Kübert, “Use of Graphical Modelling, Semantics and Service Level Agreements for High Performance Computing”

Title: Use of Graphical Modelling, Semantics and Service Level Agreements for High Performance Computing

Abstract: The work described in this paper targets the improvement of the process leading to the creation of Service Level Agreements (SLAs) for High Performance Computing (HPC). This improvement is achieved by means of semantic processing. Additionally, the novel HPC-WSAG schema is used to create SLAs addressing HPC-specific requirements. The scenario of using semantic enrichment to request computing time and resources is described through a specific use case of the plugIT project. The presented approach is applicable to HPC centres in general and makes use of a new SLA specification and an enhanced SLA management approach by considering semantic matching.

2011/10 BIR 2011

D. Karagiannis and N. Visic, “Next Generation of Modelling Platforms”

Title: Next Generation of Modelling Platforms

Abstract: Future enterprise systems require an elaborate conceptual foundation that promotes a tight mutual alignment between information systems and business to effectively support business operations and managerial decision-making. Thus a growing number of groups around the world show interest in modelling methods – either standard or individual ones – that satisfy the requirements of their domain and comply with the conceptual foundations. In order to analyze modelling methods in different domains, we introduce a generic modelling method specification framework that describes modelling methods in terms of three major parts: (i) the modelling language, which describes syntax, semantics and notation, (ii) the modelling procedures, which describe the methodology, as well as (iii) algorithms and mechanisms that provide “functionality to use and evaluate” models described by a modelling language. Simultaneously with the rise of modelling methods there is a need for re-use, integration or combination of different modelling methods. The metamodelling approach is considered to provide the required concepts and mechanisms to combine different modelling methods for so-called Hybrid Method Engineering. The Next Generation Modelling Framework (NGMF) supports Hybrid Method Engineering on both a conceptual and a technical integration level. On the conceptual level the NGMF provides mechanisms to encapsulate modelling methods and enable the hybrid use of different modelling languages. On the technical level the Next Generation Modelling Platform provides functionality for modelling method engineering and modelling method application. The organizational framework is provided by the Open Model Initiative (OMI), which supports users in realizing and applying this approach based on three pillars: community, projects and foundations.

2011/09 I-Know 2011

Woitsch R., Hrgovcic V., “Modelling Knowledge: An Open Models Approach”

Title: Modelling Knowledge: An Open Models Approach

Abstract: Processing knowledge is a prominent field of research and – after the knowledge management hype a decade ago – also in business. We observe two main trends, although they are not explicitly distinguishable: first, knowledge engineering approaches that focus on machine-interpretable knowledge, and second, knowledge management approaches that focus on human-interpretable knowledge. Both approaches can be supported by models – more formalized knowledge expressions for knowledge engineering, and also informal knowledge expressions for knowledge management. The Open Knowledge Model (OKM) project on Open Models aims to bring these two communities together by applying a Hybrid Modelling Method Engineering approach for knowledge models. A first prototype has been developed in the project plugIT. This paper introduces the approach and the findings, and provides an outlook on OKM.

2011/09 CEC 2011

Hrgovcic V., Woitsch R., Karagiannis D., “Hybrid Service Modelling in Enterprise Computing”

Title: Hybrid Service Modelling in Enterprise Computing

Abstract: The Internet has evolved into a generic platform and become a fully pervasive infrastructure providing services anywhere and anytime. Hence the assumption is that any service requested by any business process is already provided somewhere on the Internet. This requires that services are conceptualized to fit the descriptions of business processes but at the same time rely on their technical specification; hence the semantic distance between business processes and service technology must be bridged. “Service modeling” has been identified as a promising concept to enable such semantic bridging. Each phase of the service life cycle is well supported by a range of service models. The research challenge of this paper is to introduce a holistic modeling framework that enables the use of hybrid service models to bridge larger semantic distances than is possible using only individual service models. The paper discusses the motivation and challenges as well as the conceptual and technical approach before introducing prototypes as a proof of concept.

2011/07 COMPSAC 2011

W. Utz, R. Woitsch, D. Karagiannis, “Conceptualisation of Hybrid Service Models”

Title: Conceptualisation of Hybrid Service Models

Abstract: Engineering of services using a model-driven approach has become a prominent field in the research and commercial world in recent years. Looking at Information and Communication Technologies (ICT) related publications, the term “service modeling” carries a multitude of perspectives and aspects. In order to make the results tangible, a grouping of observations from the literature has been performed, allowing the identification of research challenges for a Holistic Service Modeling framework. As a realization proposal, the Hybrid Modeling Approach for Services is introduced. The approach builds upon metamodelling concepts, resulting in five axioms for combining modeling perspectives/aspects in an open manner.

2011/07 COMPSAC 2011

V. Hrgovcic, W. Utz, D. Karagiannis, “Service Modelling: A Model-Based Approach for Business and IT Alignment”

Title: Service Modelling: A Model-Based Approach for Business and IT Alignment

Abstract: Service modelling has become important in recent years due to the successful uptake of SOA-based environments and business models, pushed forward by the overall growth of the service sector as a leading part of the worldwide economy. Service modelling gained importance from the requirement to better understand, define and control services as the basic building blocks of SOA-based environments. This paper proposes a model-driven approach to how Business and IT alignment in the service life cycle can be supported by introducing knowledge technologies for service models. The model-based approach is presented by first introducing the conceptual research challenges for integrated modelling methods, followed by an overview of the prototypic implementation of the service modelling platform, and concludes with a demonstration in the HPC-based HLRS use case.

2011/02 SIGMOD 2011

H. Kondylakis, D. Plexousakis, “Exelixis: Evolving Ontology-Based Data Integration System”

Title: Exelixis: Evolving Ontology-Based Data Integration System

Abstract: The evolution of ontologies is an undisputed necessity in ontology-based data integration. Yet, few research efforts have focused on addressing the need to reflect ontology evolution onto the underlying data integration systems. We present Exelixis, a web platform that enables query answering over evolving ontologies without mapping redefinition. This is achieved by rewriting queries among ontology versions. First, changes between ontologies are automatically detected and described using a high-level language of changes. Those changes are interpreted as sound global-as-view (GAV) mappings. Then query expansion is applied in order to consider constraints from the ontology, and unfolding to apply the GAV mappings. Whenever equivalent rewritings cannot be produced, we a) guide query redefinition and/or b) provide the best “over-approximations”, i.e. the minimally-containing and minimally-generalized rewritings. For the demonstration we will use four versions of the CIDOC-CRM ontology and real user queries to show the functionality of the system. We will then allow conference participants to interact directly with the system to test its capabilities.

2011/02 eChallenges 2011

P. Gilet, A. Tenschert, R. Kübert, “Use of Semantics and Service Level Agreements for High Performance Computing”

Title: Use of Semantics and Service Level Agreements for High Performance Computing

Abstract: The work described in this paper targets the improvement of the process leading to the creation of Service Level Agreements (SLAs) for High Performance Computing (HPC). This improvement is achieved by means of semantic processing. Additionally, the novel HPC-WSAG schema is used to create SLAs addressing HPC-specific requirements. The scenario of using semantic enrichment to request computing time and resources is described through a specific use case of the plugIT project. The presented approach is applicable to HPC centres in general and makes use of a new SLA specification and an enhanced SLA management approach by considering semantic matching.

2010/11 Workshop on Ontology Dynamics, International Semantic Web Conference 2010

H. Kondylakis, D. Plexousakis and Y. Tzitzikas, “Ontology Evolution in Data Integration”

Title: Ontology Evolution in Data Integration

Abstract: Due to the rapid scientific development, ontologies and schemata need to change. When ontologies evolve, the changes should somehow be rendered and used by the pre-existing data integration systems, a problem that most of the integration systems available today seem to ignore. In this paper, we propose a data integration system that enables and exploits ontology evolution. We start by highlighting what is missing from the state of the art and then we sketch the requirements and the architecture of such a system. We proceed further to redefine data integration under ontology evolution and we show how to describe ontology evolution using logs. Then, we provide the algorithms for rewriting queries among different ontology versions and we present an algorithm based on MiniCon that uses these rewritings and that is guaranteed to find the set of maximally-contained rewritings for the sources. Our extension of the MiniCon algorithm does not involve a significant increase in computational complexity and remains scalable.

2010/10 Open Knowledge Models Workshop (at EKAW 2010)

Kondylakis, H.; Plexousakis, D.; Wache, H.; Wolff, D.; Hinkelmann, K.; Bergmayr, A., “Semantic Technology to enable Business and IT alignment”

Title: Semantic Technology to enable Business and IT alignment

Abstract: Today we're witnessing the necessity to align Business and Information Technology (IT). Based on the assumption that businesses in different sectors of the economy will require IT for different reasons and in different ways, this paper describes the challenges arising and our solutions concerning that alignment. Our idea is to link human interpretable graphical models – capturing partly semi-formal knowledge – with machine interpretable semantic formalisms to enable: (1) a tighter involvement of domain experts when expressing formal knowledge in the specification of business requirements on IT infrastructure and services, (2) different graphical modelling languages for different views in order to provide modelling languages the domain expert is accustomed to work with. That vision is realized in the Semantic Kernel, which is the core element of our Next Generation Modelling Framework.

2010/10 Open Knowledge Models Workshop (at EKAW 2010)

Hinkelmann, Knut; Nikles, Simon; Wache, Holger; Wolff, Daniela, “An Enterprise Architecture Framework to organize Model Repositories”

Title: An Enterprise Architecture Framework to organize Model Repositories

Abstract: Models are a valuable knowledge asset for an enterprise. An enterprise model repository can improve sharing of enterprise knowledge and thus can exploit the use of the knowledge for various applications. In this work we present a framework for the organisation of enterprise models. The framework was derived from enterprise architecture frameworks. It distinguishes three dimensions: aspect, perspective, and modelling language family. For each of these dimensions we derive possible values. The framework can be used for enterprise repositories but also for knowledge exchange in a community as proposed by the Open Model Initiative.

2010/10 MODELS 2010

A. Bergmayr, “ReuseMe – Towards Aspect-Driven Reuse in Modelling Method Development”

Title: ReuseMe – Towards Aspect-Driven Reuse in Modelling Method Development

Abstract: Today, the construction of individual modelling methods is a commonly accepted practice in different application domains. Method engineers are, however, faced with the complexity and high effort involved, especially during modelling language development, which is considered a major task when developing methods. To alleviate this, one obvious step is to promote reuse, thereby increasing productivity and quality, similar to what can be expected from reuse in software and information systems engineering. Although considerable progress in language modularization and composition is observable, the reuse principle is still rarely adopted in practice. Therefore, in this work, a research roadmap for ReuseMe (Reuse Methods), a novel aspect-oriented reuse approach, is proposed. By involving artefacts generated during a method's conceptualization down to its implementation and putting forth fundamental ingredients for a comprehensive method reuse process on top of an Open Model Repository, method reuse is leveraged. This paves the way for establishing a library populated with potential reusable aspects that modularize method artefacts based on separating language concerns.

2010/10 eChallenges 2010

D. Karagiannis, R. Woitsch, V. Hrgovcic, “Business and IT-Alignment: A plug-in approach for Model-Oriented Knowledge Transfer”

Title: Business and IT-Alignment: A plug-in approach for Model-Oriented Knowledge Transfer

Abstract: Today we’re witnessing the necessity to align Business and Information Technology (IT). Based on the assumption that businesses in different sectors of the economy will require IT for different reasons and in different ways, this paper describes the challenges arising and our solutions concerning that alignment. More specifically, we present the Semantic Kernel, which is the core element of our Next Generation Modelling Framework (NGMF). Our idea is to link human interpretable graphical models – partly semi-formal – with machine interpretable semantic formalisms to enable: (1) a tighter involvement of domain experts when expressing formal knowledge in the specification of business requirements on IT infrastructure and services, (2) different graphical modelling languages for different views in order to provide modelling languages the domain expert is accustomed to work with. The research challenges are discussed, and the first prototype of the Semantic Kernel and its application at the use case site is introduced.

2010/07 IEEE Cloud 2010

Daniel Ebneter, Stella Gatziu Grivas, Tripathi Uttam Kumar, Holger Wache, “Enterprise Architecture Frameworks for Enabling Cloud Computing”

Title: Enterprise Architecture Frameworks for Enabling Cloud Computing

Abstract: Cloud computing has emerged as a strong factor driving companies to remarkable business success. Far from being just an IT-level support solution, cloud computing is triggering changes in companies' core business models by making them more efficient and cost-effective. This has generated interest among many companies in adopting cloud computing for their existing and new business processes. In this research we present an approach which a company can use to analyze whether its operations can be positively impacted by moving to the cloud. Further, we describe an approach with which the company can make that transition to the cloud.

2010/06 Hellenic Data Management Symposium 2010

H. Kondylakis, D. Plexousakis and Y. Tzitzikas, “Ontology Evolution in Data Integration”

Title: Ontology Evolution in Data Integration

Abstract: Due to the rapid scientific development, ontologies and schemata need to change. When ontologies evolve, the changes should somehow be rendered and used by the pre-existing data integration systems, a problem that most of the integration systems available today seem to ignore. In this paper, we propose a data integration system that enables and exploits ontology evolution. We start by highlighting what is missing from the state of the art and then we sketch the requirements and the architecture of such a system. We proceed further to redefine data integration under ontology evolution and we show how to describe ontology evolution using logs. Then, we provide the algorithms for rewriting queries among different ontology versions and we present an algorithm based on MiniCon that uses these rewritings and that is guaranteed to find the set of maximally-contained rewritings for the sources. Our extension of the MiniCon algorithm does not involve a significant increase in computational complexity and remains scalable.

2010/06 2nd International Workshop on Advanced Enterprise Architecture and Repositories

Hinkelmann, K., Merelli, E. and Thönssen, B., “The Role of Content and Context in Enterprise Repositories.”

Title: The Role of Content and Context in Enterprise Repositories

Abstract: In this paper we show how to utilize enterprise architectures to model relations between the information sources of an enterprise. With our approach, enterprise architecture models are no longer passive entities but can be operationalized to identify dependencies between the various models and to integrate information sources. This makes them a basis for agile enterprises. We show that the elements of an enterprise architecture and the referenced data can take the role of both context and content, depending on the objectives of their use. We present two independent approaches for modeling an enterprise ontology.

2010/03 EDBT-ICDT Workshop

H. Kondylakis, “Enabling Ontology Evolution in Data Integration”, EDBT/ICDT PhD Workshop, 2010

Title: Enabling Ontology Evolution in Data Integration

Abstract: Due to the rapid scientific development, ontologies and schemata need to change. When ontologies evolve, the changes should somehow be rendered and used by the pre-existing data integration systems, a problem that most of the integration systems available today seem to ignore. In this paper, we propose a data integration system that enables and exploits ontology evolution. We start by highlighting what is missing from the state of the art and then we sketch the requirements and the architecture of such a system. We proceed further to redefine data integration under ontology evolution and we show how to describe ontology evolution using logs. Then, we provide the algorithms for rewriting queries among different ontology versions and we present an algorithm based on MiniCon that uses these rewritings and that is guaranteed to find the set of maximally-contained rewritings for the sources. Our extension of the MiniCon algorithm does not involve a significant increase in computational complexity and remains scalable.

2009/11 PoEM 2009

R. Woitsch, W. Utz, D. Karagiannis, “The IT-Socket: Model-based business and IT alignment”, PoEM 2009

Title: The IT-Socket: Model-based business and IT alignment

Abstract: Today we’re witnessing the necessity to align Business and Information Technology (IT) as well as a change in the role of IT from an enabler to an industrial sector in its own right. Based on the assumption that businesses in different sectors of the economy will require IT for different reasons and in different ways, this article introduces the EU project plugIT that aspires to develop an IT-Socket that will realize the vision of businesses “plugging-in” to IT. Three demonstration scenarios deal with: (1) “Certification” of IT infrastructure to stay compliant with regulations, (2) “Virtual Organisation” by evolving the current service orientation to a higher and more business driven abstraction as well as, (3) “Governance” of IT infrastructure introducing business context into highly distributed and complex systems. The IT-Socket follows a model-driven approach by introducing graphical modelling languages as mediators between the domain experts and IT. The research challenge is to link human interpretable graphical models – partly semi-formal – with machine interpretable semantic formalisms to enable: (1) a tighter involvement of domain experts when expressing formal knowledge specifying business requirements on IT infrastructure and services, (2) different graphical modelling languages for different views on the IT-Socket to provide modelling languages the domain expert is used to work with as well as, (3) a domain specific notation for semantics by integrating formal concepts of semantics with the graphic notation from modelling languages.

2009/11 OTM Conferences 2009

H. Kondylakis, G. Flouris, D. Plexousakis, “Ontology and Schema Evolution in Data Integration: Review and Assessment”, OTM Conferences 2009, pp. 932–947

Title: Ontology and Schema Evolution in Data Integration: Review and Assessment

Abstract: The development of new techniques and the emergence of new high-throughput tools have led to a new information revolution. The amount and the diversity of the information that needs to be stored and processed have led to the adoption of data integration systems in order to deal with information extraction from disparate sources. The mediation between traditional databases and ontologies has been recognized as a cornerstone issue in bringing in legacy data with formal semantic meaning. However, our knowledge evolves due to rapid scientific development, so ontologies and schemata need to change in order to capture and accommodate such evolution. When ontologies change, these changes should somehow be rendered and used by the pre-existing data integration systems, a problem that most integration systems seem to ignore. In this paper, we review existing approaches to ontology/schema evolution and examine their applicability in a state-of-the-art, ontology-based data integration setting. Then, we show that changes in schemata differ significantly from changes in ontologies. This strengthens our position that current state-of-the-art systems are not adequate for ontology-based data integration. We therefore give the requirements for an ideal data integration system that will enable and exploit ontology evolution.

2009/11/26-27 Knowledge Science, Engineering and Management - KSEM'09 - Vienna/Austria

Hrgovcic, V., Woitsch, R. and Utz, W. (2009): Knowledge Engineering in Future Internet, 3rd International Conference on Knowledge Science, Engineering and Management, (KSEM 09), Vienna, Austria, LNAI Springer

Title: Knowledge Engineering in Future Internet

Abstract: Business and IT alignment, acknowledged by enterprises as an important factor in achieving competitive advantage in the market, is undergoing slight changes as it faces the challenges and requirements imposed by the Future Internet initiative. The goal of supporting adaptation to these challenges and requirements is achieved by applying knowledge engineering (KE) to all three alignment layers: the business layer, the “IT-Socket” and the IT layer. The paper introduces approaches to support model transformation and translation (enabling multi-lingual environments on the business layer), the semantic workflow approach (enabling a next generation IT infrastructure layer), and the IT-Socket approach for alignment.

2009/11/15-20 The First International Conferences on Advanced Service Computing - SERVICE COMPUTING 2009 - Athens

Woitsch, R. and Utz, W. (2009): The IT-Socket: Model-Based Business and IT Alignment, The First International Conferences on Advanced Service Computing - SERVICE COMPUTING 2009, Athens, Greece

Title: The IT-Socket: Model-Based Business and IT Alignment

Abstract: As a result of the ongoing evolution of IT from an enabler to an industrial sector in its own right – supporting businesses in different sectors of the economy on an individual basis – alignment of Business and IT is regarded as a necessary requirement for efficient operation. This paper introduces the EU project plugIT that aims to develop an “IT-Socket” that will realize the vision of businesses “plugging-in” to IT. The results of plugIT are demonstrated within three scenarios from the IT domain that deal with: (1) “Certification” of IT infrastructure enabling regulation compliant operation of IT, (2) “Virtual Organization” aiming at evolving service orientation to a higher abstraction level focusing on business driven aspects as well as, (3) “Governance” of IT infrastructure introducing business context into distributed and complex systems applying the model-driven approach developed within the project. The paper presents the results of the first 6 months of the project within the area of use-case and application scenario development by applying the developed IT-Socket Modeling Framework as a coherent and comprehensive means for externalizing knowledge. The results achieved build up the basis for further research work related to the Semantic Kernel and implementation at the use-case partners within the Next Generation Modeling Framework.

2009/10/21-23 eChallenges e-2009 Conference

Woitsch R., Karagiannis D., Plexousakis, D. and Hinkelmann K. (2009): Plug your Business into IT: Business and IT-Alignment using a Model-Based IT-Socket, eChallenges e-2009 Conference, (eChallenges 09), Istanbul, Turkey, IOS Press

Title: Plug your Business into IT: Business and IT-Alignment using a Model-Based IT-Socket

Abstract: Today we’re witnessing the necessity to align Business and Information Technology (IT). This article introduces the EU project plugIT that aspires to develop an IT-Socket that will realize the vision of businesses “plugging-in” to IT. Three demonstration scenarios deal with: (1) “Certification” of IT infrastructure, (2) “Virtual Organisation” by evolving the current service orientation to a higher and more business driven abstraction as well as (3) “Governance” of IT infrastructure introducing business context into highly distributed and complex systems. The IT-Socket follows a model-driven approach to link human interpretable graphical models – partly semi-formal - with machine interpretable semantic formalisms to enable: (1) a tighter involvement of domain experts when expressing formal knowledge, (2) different graphical modelling languages for different views on the IT-Socket as well as (3) a domain specific notation for semantics by integrating formal concepts of semantics with the graphic notation from modelling languages.

2009/09 ADBIS 2009

H. Kondylakis, M. Doerr, D. Plexousakis, “Empowering Provenance in Data Integration”, ADBIS 2009, pp. 270–285

Title: Empowering Provenance in Data Integration

Abstract: The provenance of data has recently been recognized as central to the trust one places in data. This paper presents a novel framework to empower provenance in a mediator-based data integration system. We use a simple mapping language for mapping schema constructs between an ontology and relational sources, capable of carrying provenance information. This language extends the traditional data exchange setting by translating our mapping specifications into source-to-target tuple-generating dependencies (s-t tgds). We then formally define the provenance information we want to retrieve, i.e. annotation, source and tuple provenance. We provide three algorithms to retrieve provenance information using information stored in the mappings and the sources. We show the feasibility of our solution and the advantages of our framework.

2009/07/31 E&I Journal (Springer)

Woitsch R., Karagiannis D., Plexousakis D. and Hinkelmann K., “Business and IT-Alignment: The IT-Socket”, Elektrotechnik & Informationstechnik, 7-8/2009, Springer

Title: Business and IT Alignment: The IT-Socket

Abstract: Today we’re witnessing the necessity to align Business and Information Technology (IT) as well as a change in the role of IT from an enabler to an industrial sector in its own right. Based on the assumption that businesses in different sectors of the economy will require IT for different reasons and in different ways, this article introduces the EU project plugIT that aspires to develop an IT-Socket that will realize the vision of businesses “plugging-in” to IT.
Three demonstration scenarios deal with: (1) “Certification” of IT infrastructure to stay compliant with regulations, (2) “Virtual Organisation” by evolving the current service orientation to a higher and more business driven abstraction as well as, (3) “Governance” of IT infrastructure introducing business context into highly distributed and complex systems.
The IT-Socket follows a model-driven approach by introducing graphical modelling languages as mediators between the domain experts and IT. The research challenge is to link human interpretable graphical models – partly semi-formal – with machine interpretable semantic formalisms to enable: (1) a tighter involvement of domain experts when expressing formal knowledge specifying business requirements on IT infrastructure and services, (2) different graphical modelling languages for different views on the IT-Socket to provide modelling languages the domain expert is used to work with as well as, (3) a domain specific notation for semantics by integrating formal concepts of semantics with the graphic notation from modelling languages.

2008/12/04 FFG European Research Champions (in German)

Title: Informationstechnologie aus der Steckdose (Information Technology from the Socket)

Abstract: Uniform structures and interfaces across different IT applications make them easier to handle and maintain. Within plugIT, a new system for the modelling and design of software is to be developed. The goal is a new “language” for IT applications.