The conceptual architecture and detailed specifications of the integrated COCKPIT Toolkit will focus on loosely coupled architectures to ensure that the constituent components are interoperable across platforms, operating systems and programming languages. A high-level architectural view of the COCKPIT Toolkit is illustrated below.
The COCKPIT Toolkit comprises a set of specific components, described briefly below.
The Service Engineering Tool (SE) aims to generate, from subsets of the information it captures, a simulation model that can be imported into and executed by the Service Simulation & Visualization Tool (SV). The Service Engineering Tool (SE) will itself be based on Eclipse.
In a typical scenario, a designer would use the SE tool to model a public service. Based on a subset of the modeling information (i.e. drawn from the models created by the designer), SE exports a file (the simulation model) in a format that can be read by the AnyLogic software.
The Service Simulation & Visualization Tool (SV) will utilize the AnyLogic tool in order to execute, visualize and possibly tweak the simulation models generated by the Service Engineering Tool (SE).
In a typical scenario, a simulation expert uses the AnyLogic tool to import and execute the service simulation model files generated by the Service Engineering Tool (SE). AnyLogic is a simulation environment; one instance of it will run on the common COCKPIT web server, where generated simulation models can be deployed and executed remotely. From within AnyLogic, the imported file is then exported in 'jar' format. The generated 'jar' file is passed as a parameter to the main AnyLogic Java applet of a web page, thus allowing the execution and visualization of any service designed through the Service Engineering Tool (SE). A study is currently under way regarding the extent to which the above process can be automated as a pipeline; however, there may be technical difficulties, since this would involve embedding custom code to aid the simulation.
The tooling approach to service costing and valuation is based on three components. The Service Cost and Value Modeling component (CV/mod), which is integrated with the Service Engineering Tool, prepares service architecture models, as well as value categories and cost factors retrieved from public opinion, for the semi-automated creation of service cost and value models on the basis of respective metamodels and blueprints (reusable patterns for combining service models and elements from different levels of abstraction). Cost models are instances of a framework for calculating the total cost of ownership (TCO) of IT services. Value models provide a structure (usually trees) of the different value categories, or qualities, associated with a service (e.g. security) and the possible options for their realization (e.g. SSL or Kerberos). The CV/mod component is used to design the cost and value models of a given service, which are expressed in the form of formal diagrams. A study is currently under way regarding the different options for representing conceptual as well as formal cost and value models. The format of these models is most likely to follow an EMF-based approach in order to enable interoperability with other models (e.g. the service model).
In the value model, value categories are defined with respect to underlying goals. Goals and their value categories are defined partly manually by designers and partly by mining citizen opinions. The value categories are, however, not used for cost estimation. The cost model estimates the costs of specific (more technical) service elements, such as tasks or processes, based on different cost factors. Service valuation is based on pairwise comparison of scenarios (service variants) that specify options and associated costs. Public opinion derived from the OM tool influences patterns (in terms of goals and value categories) and architecture (through the valuation of options/scenarios); it also provides inputs that improve the estimation of costs. The CV/mod component could be realized as an Eclipse plug-in if the format of both cost and value models follows an EMF-based approach.
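As an illustration only, the tree of value categories described above might be sketched in Java as follows; the class names, category names and options are invented for this sketch and do not come from the COCKPIT specifications:

```java
// Hypothetical sketch of a value-model tree: value categories form a tree,
// and categories carry the concrete options for their realization
// (e.g. SSL vs. Kerberos under "Security").
import java.util.ArrayList;
import java.util.List;

public class ValueModelSketch {

    /** A node in the value tree: a category with sub-categories and options. */
    static class ValueCategory {
        final String name;
        final List<ValueCategory> children = new ArrayList<>();
        final List<String> options = new ArrayList<>();

        ValueCategory(String name) { this.name = name; }

        ValueCategory addChild(ValueCategory c) { children.add(c); return this; }
        ValueCategory addOption(String o) { options.add(o); return this; }

        /** Total number of options in this subtree. */
        int optionCount() {
            int n = options.size();
            for (ValueCategory c : children) n += c.optionCount();
            return n;
        }
    }

    /** Builds a tiny example model with two categories and four options. */
    static ValueCategory sampleModel() {
        ValueCategory root = new ValueCategory("Public service value");
        ValueCategory security = new ValueCategory("Security");
        security.addOption("SSL").addOption("Kerberos");
        ValueCategory usability = new ValueCategory("Usability");
        usability.addOption("Web form").addOption("Mobile app");
        root.addChild(security).addChild(usability);
        return root;
    }
}
```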
The Service Cost Calculation component (CV/calc) computes cost estimations for a service cost model based on static (fixed cost units) and dynamic (resource consumption over time) cost data. CV/calc takes as input a service cost model from CV/mod and produces as output per-value-category costs and/or an aggregated cost estimation (although the degree of aggregation is still under consideration). Feeding cost estimations back into the Service Engineering Tool might lead to a cost optimization cycle. Cost estimations are also calculated for the different options of the service value categories in the service value model.
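The combination of static and dynamic cost data can be illustrated with a small hypothetical sketch; the class names and figures are invented for illustration and this is not the CV/calc implementation:

```java
// Illustrative sketch only: combining static cost units (fixed fees) with
// dynamic, consumption-based costs (e.g. resource usage reported by the
// simulation) into one aggregated estimation.
public class CostCalcSketch {

    /** A cost factor with a fixed part and a consumption-dependent part. */
    static class CostFactor {
        final String name;
        final double fixedCost;    // static cost unit, e.g. a licence fee
        final double ratePerUnit;  // dynamic rate, e.g. cost per server-hour
        CostFactor(String name, double fixedCost, double ratePerUnit) {
            this.name = name;
            this.fixedCost = fixedCost;
            this.ratePerUnit = ratePerUnit;
        }
        double cost(double consumedUnits) {
            return fixedCost + ratePerUnit * consumedUnits;
        }
    }

    /** Aggregated estimation over all factors, given simulated consumption. */
    static double estimate(CostFactor[] factors, double[] consumption) {
        double total = 0;
        for (int i = 0; i < factors.length; i++) {
            total += factors[i].cost(consumption[i]);
        }
        return total;
    }
}
```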
These value models are used by the Participational Service Valuation component (CV/val) to construct “configurators”: representations of the value model that citizens can use to find their preferred combinations of service value options (features and qualities), restricted by an overall budget. More concretely, citizens provide weightings of value categories as well as pairwise comparisons of options, and the best overall combination of options is computed automatically (based on methods such as AHP). The result is a deterministic configuration of the service that can be used to guide the construction of a simulation model. The simulation in turn yields dynamic cost data that improves the calculation of cost estimations and leads to a cost optimization cycle. The CV/val component basically transforms the different types of service value categories (modeled in a value model) into appropriate visual components. These visual components may be placed on a web page (of the DP) and used to alter the parameters of the service being simulated in SV. The CV/val component would interoperate with DP via web service calls and present feedback to end users (via DP), most likely through spreadsheets; a better approach would combine the presentation of value categories and options with the service visualization. A 'configurator' is under consideration that would allow end users to explore different scenarios/options constrained by a budget; however, the concrete presentation also depends on the implementation/interfaces of SV.
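Since AHP is named as a candidate method, the following sketch shows one standard way AHP derives category weights from citizens' pairwise comparisons (the row geometric-mean method). Whether CV/val will use this exact variant is an open design question; the sketch only illustrates the technique:

```java
// Sketch of AHP-style priority weighting via the row geometric-mean method.
public class AhpSketch {

    /**
     * Derives normalized priority weights from a pairwise-comparison matrix a,
     * where a[i][j] expresses how strongly category i is preferred over
     * category j (reciprocal matrix: a[j][i] = 1 / a[i][j]).
     */
    static double[] weights(double[][] a) {
        int n = a.length;
        double[] w = new double[n];
        double sum = 0;
        for (int i = 0; i < n; i++) {
            double prod = 1;
            for (int j = 0; j < n; j++) prod *= a[i][j];
            w[i] = Math.pow(prod, 1.0 / n); // geometric mean of row i
            sum += w[i];
        }
        for (int i = 0; i < n; i++) w[i] /= sum; // normalize to sum to 1
        return w;
    }
}
```

For example, if a citizen judges "Security" three times as important as "Usability", the matrix {{1, 3}, {1/3, 1}} yields the weights 0.75 and 0.25.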
In principle, service blueprints are based on the assumption that a general service model might be implemented in a multitude of ways that differ both in their value as perceived by end users and in their costs. Additionally, the cost and perceived value of a service also depend on other factors, such as the way it will be delivered or used. A service blueprint defines a limited set of possible service implementations together with related goals, value categories and cost factors for a common service delivery/usage scenario. The idea is to raise the quality and lower the complexity of cost estimation and valuation based on prior experience.
Opinion Mining (OM)
The Opinion Mining Tool (OM) comprises two main elements. The first is a service-specific ontology, capturing key concepts and elements of the service (specific per service), including the informal, common-language ways of describing them (specific per language). The second is an XML-based lexicon of positive and negative statements, which is both domain-specific and language-specific.
The main outputs of the OM Tool will be: (a) the frequency at which people are talking about a service, and (b) the identification of positive and negative statements. The Opinion Mining Tool will need to be trained, under supervision, for the domain of public services in order to learn to identify positive/negative statements in the sources it inspects. Spam filters will be utilized to prevent spam content from being evaluated; the details of the spam filters to be used have yet to be decided.
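The identification of positive and negative statements from a lexicon can be illustrated with a deliberately simplified sketch. In the actual tool the lexicon is XML-based and domain- and language-specific, and the classifier is trained under supervision; the word sets and scoring below are invented for illustration only:

```java
// Naive lexicon-based polarity sketch: count tokens that match the positive
// or negative word lists. The real OM tool is considerably more elaborate.
import java.util.Set;

public class PolaritySketch {

    /**
     * Returns a polarity score for a statement:
     * > 0 suggests a positive statement, < 0 negative, 0 neutral/unknown.
     */
    static int score(String text, Set<String> positive, Set<String> negative) {
        int score = 0;
        for (String token : text.toLowerCase().split("\\W+")) {
            if (positive.contains(token)) score++;
            else if (negative.contains(token)) score--;
        }
        return score;
    }
}
```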
RapidMiner, an open-source system for data mining, will be utilized at the core of the Opinion Mining Tool (OM). A study is currently under way regarding OM's approach to interactions with the rest of the components. A dashboard will be created to manage the OM tool; it will be used to search for sources, define specific queries, feed bulk queries, etc. The aim is to have several queries running for each COCKPIT pilot service. OM will deploy a mining/crawling mechanism that runs periodically for each query and each pilot, thus identifying and capturing new units of information. The mechanism will store its results locally, where they can be retrieved by other components, and will support both online and offline scenarios. The saved results will cover a significant time frame (e.g. six months) so as to help identify opinion trends over time.
Policy and Law Retrieval Tool (PL)
The Policy and Law Retrieval Tool (PL) will be a simple document repository supported by a DBMS. Legal documents can be uploaded from the Citizens' Deliberative Engagement Platform (with administrative credentials), while retrieval can likewise be performed via the DP, either by citizens or by service designers. In general, PL will support uploading/downloading of documents and URLs (pointing to external document resources), categorization/grouping (e.g. collections of documents belonging to a specific legal aspect), and simple search functionality.
The Policy and Law Retrieval Tool (PL) should be realized as a web service, so that the Service Engineering Tool (SE) can interoperate with PL through web service calls and give the end user an integrated experience. Ideally, PL should also have some smart search built in, so that corresponding policy and legal documents can be retrieved based on keywords and other attributes.
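A minimal sketch of the kind of keyword search PL could offer is given below. The class and field names are hypothetical, and a production version would of course query the underlying DBMS rather than in-memory lists:

```java
// Hypothetical sketch: rank stored document entries by how many of the
// query's keywords appear in their title or keyword attributes.
import java.util.ArrayList;
import java.util.List;

public class LegalDocSearchSketch {

    /** A repository entry: a title plus free-text keyword attributes. */
    static class Doc {
        final String title;
        final String keywords; // attributes attached when the document is uploaded
        Doc(String title, String keywords) { this.title = title; this.keywords = keywords; }
    }

    /** Titles of documents matching at least one query term, best match first. */
    static List<String> search(List<Doc> repo, String query) {
        String[] terms = query.toLowerCase().split("\\s+");
        List<Doc> hits = new ArrayList<>();
        List<Integer> scores = new ArrayList<>();
        for (Doc d : repo) {
            String haystack = (d.title + " " + d.keywords).toLowerCase();
            int score = 0;
            for (String term : terms) if (haystack.contains(term)) score++;
            if (score > 0) {
                // insertion by descending score keeps ranking simple
                int pos = 0;
                while (pos < scores.size() && scores.get(pos) >= score) pos++;
                hits.add(pos, d);
                scores.add(pos, score);
            }
        }
        List<String> titles = new ArrayList<>();
        for (Doc d : hits) titles.add(d.title);
        return titles;
    }
}
```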
Citizens' Deliberative Engagement Platform (DP)
The Citizens' Deliberative Engagement Platform (DP) will be based on the open-source DotNetNuke CMS.
The ontological model (the formal representation of a public service) will go far beyond existing, rather conventional formal representations for software application development, defining not only traditional functional and extra-functional properties but also other social and economic considerations, including uncertainty, costs, potential value, goals, political considerations and public constraints such as legislation and regulatory frameworks. In addition, this ontological model will be endowed with concepts and mechanisms to facilitate the co-definition of public services.
It is important to stress that the formal representation will not be a semantic ontology, e.g. one specified in OWL or any other semantic web language. While semantics play a pivotal role in opinion mining, domain- and language-specific semantic ontologies will be developed as an integral part of the opinion mining tools. The formal representation will instead define its semantics through metamodeling, and as such will be an integral part of the Service Engineering Tool. We will rely on the Eclipse Modeling Framework (EMF), an open-source framework for developing model-driven applications, to specify the intended metamodel. By using EMF, we can automatically generate the Java code for graphically editing, manipulating, reading, and serializing data based on the adopted public service formal representation. This also allows, where needed, the automatic generation of software services based on the adopted formal representation.
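To make the metamodeling approach more concrete, a fragment of what an EMF-based metamodel could look like in Ecore's XMI serialization is sketched below. The package name, nsURI, classes and attributes are purely illustrative assumptions, not the actual COCKPIT metamodel:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative Ecore fragment: a public service with contained goals. -->
<ecore:EPackage xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:ecore="http://www.eclipse.org/emf/2002/Ecore"
    name="cockpit" nsURI="http://example.org/cockpit/1.0" nsPrefix="cockpit">
  <eClassifiers xsi:type="ecore:EClass" name="PublicService">
    <eStructuralFeatures xsi:type="ecore:EAttribute" name="name"
        eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EString"/>
    <eStructuralFeatures xsi:type="ecore:EReference" name="goals"
        upperBound="-1" containment="true" eType="#//Goal"/>
  </eClassifiers>
  <eClassifiers xsi:type="ecore:EClass" name="Goal">
    <eStructuralFeatures xsi:type="ecore:EAttribute" name="description"
        eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EString"/>
  </eClassifiers>
</ecore:EPackage>
```

From such a metamodel, EMF's generator can produce the Java model code together with basic editing and serialization support.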
We currently foresee that the formal representation may be architected as a stratified model, catering to the specific concerns of various actors (“viewpoints”) along various dimensions (“views”). In particular, the following three viewpoints have been tentatively defined: the government viewpoint, the citizens' viewpoint and, lastly, the operations viewpoint. Indeed, these viewpoints may be mapped onto the three conceptual layers upon which this formal representation is based (the Governmental layer, the Citizen interaction layer, and the Operations and infrastructure layer). In addition, the formal representation will be compartmentalized into several views. Again, we tentatively define some views:
• The Static/Dynamic View (cf. the blue boxes in Figure 1). These views denote the classical structural and behavioral (functional) views of a system. While the static view captures the time-independent characteristics of a public service, the dynamic view defines the dynamic behavior of a service over time.
• The NF/Value Views (cf. the purple boxes in the same figure). The non-functional view encapsulates public service qualities such as performance. The value view is a specific type of non-functional view that defines the economic value of the public service, its associated pricing and costing model, and other value-related concepts.
• Compliance View. The compliance view permeates the other views and will define formal concepts to declare compliance requests, define compliance risks and controls, and apply them to compliance targets (e.g., entities or processes).
• Simulation View. Similar to the compliance view, the simulation view permeates the functional and non-functional components of the formal representation, focusing on the definition and correlation of formal parameters, notably, Key Performance Indicators (KPIs), Process Performance Metrics (PPMs), Service Level Agreements (SLAs) and Quality of Service.
Baseline Architecture of the Formal Representation
Clearly, the views and viewpoints defined in this vision document are preliminary in nature. Based on a literature review and driven by the use cases, the architecture of COCKPIT's formal representation/metamodel will be further refined, extended and, where needed, adapted.