Solution of the Heterogeneous Data: Methodology Chapter


Once the mapping has been created, a relationship between the objects can be established so that their object properties can also be mapped. Property mapping is achieved by relating the XML schema elements of the objects. In the final stage, the transformation rules generated from the mapping can be exported. The semantic integration of information is a costly and difficult task. Semantic Web technologies can assist in integrating multiple heterogeneous data schemas by mapping each schema to one or more ontologies. The main aim of the SIM architecture is to present a common understanding of a given subject and then to integrate the heterogeneous systems by means of Semantic Web technology. The whole process is supported by a dedicated ontology schema that offers a semantic representation of the data and allows users to access shared data that is processed by highly automated tools.
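As a concrete illustration of the property mapping and rule export described above, the sketch below shows one plausible way a mapping between XML schema elements and ontology object properties could be represented and exported as transformation rules. The element paths, ontology IRIs and rule format are assumptions made for this example; they are not taken from the SIM specification.

```python
# A minimal sketch (not the SIM implementation) of representing a mapping
# between XML schema elements and ontology properties, and exporting the
# resulting transformation rules. All paths and IRIs are hypothetical.
import json

# Hypothetical mapping: XML schema element path -> ontology property IRI
property_mapping = {
    "vehicle/manufacturer": "http://example.org/onto#hasManufacturer",
    "vehicle/model":        "http://example.org/onto#hasModel",
    "vehicle/year":         "http://example.org/onto#productionYear",
}

def export_transformation_rules(mapping, target_class):
    """Turn the element-to-property mapping into simple transformation rules."""
    rules = []
    for xml_path, ontology_property in mapping.items():
        rules.append({
            "source_element": xml_path,           # where the value comes from
            "target_class": target_class,         # ontology class of the instance
            "target_property": ontology_property  # property that receives the value
        })
    return json.dumps(rules, indent=2)

if __name__ == "__main__":
    print(export_transformation_rules(property_mapping,
                                      "http://example.org/onto#Vehicle"))
```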

3.2.1.1 The components of the SIM architecture

The Schematic Transformation

The middleware is responsible for the integration of data. It should allow users to concentrate on what information is needed while concealing the details of how that information is obtained and integrated, as pointed out by Silva and Cardoso (2006). In a nutshell, a data integration system should provide a mechanism for seamless communication with autonomous data sources, perform queries over heterogeneous data sources, and aggregate the results into an interoperable data format. The main challenge therefore lies in bridging the schematic, syntactic and semantic differences that exist between the data sources; in other words, in tackling the data source heterogeneity problem. As mentioned earlier, three forms of data heterogeneity arise when integrating data from autonomous, heterogeneous and distributed data sources: schematic heterogeneity (the schemas of the data sources differ), syntactic heterogeneity (the technologies that support the data sources differ) and semantic heterogeneity (the data sources use different nomenclatures, concepts, meanings and vocabularies). The Schematic Transformation module integrates the data originating from the different sources and resolves schematic and syntactic heterogeneity. Semantic heterogeneity is resolved by the Syntactic-to-Semantic Transformation module.
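To make the mediator's role concrete, the following sketch shows one possible way such middleware could query two heterogeneous sources and aggregate the answers into a single interoperable structure. The two sources (a CSV feed and an XML feed with different element names) are invented stand-ins; this illustrates the principle rather than the actual SIM middleware.

```python
# A minimal sketch of the mediator idea: query heterogeneous sources and
# aggregate their answers into one interoperable format. The two "sources"
# below are invented stand-ins, not the actual SIM data sources.
import csv
import io
import xml.etree.ElementTree as ET

CSV_SOURCE = "model,year\nRoadster,2019\nHatch,2021\n"
XML_SOURCE = "<cars><car><name>Estate</name><built>2020</built></car></cars>"

def query_csv_source(raw):
    """Syntactic wrapper for a CSV-backed source."""
    return [{"model": r["model"], "year": int(r["year"])}
            for r in csv.DictReader(io.StringIO(raw))]

def query_xml_source(raw):
    """Syntactic wrapper for an XML-backed source; note its different schema."""
    root = ET.fromstring(raw)
    return [{"model": c.findtext("name"), "year": int(c.findtext("built"))}
            for c in root.findall("car")]

def integrate():
    """The mediator hides where and how the data was obtained from the user."""
    return query_csv_source(CSV_SOURCE) + query_xml_source(XML_SOURCE)

if __name__ == "__main__":
    for record in integrate():
        print(record)
```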

The architecture

Cardoso (2007) presented the architecture of the Schematic Transformation module (see figure 1). It comprises two main components. The first is the Extractor Manager, which connects to the different data sources recognized by the system and performs data extraction operations on them; the extracted data fragments are then compiled so as to generate ontology instances. The second is the Mapping module, which handles the mapping between the data sources and the ontology schema. This mapping information is produced by intersecting the ontology classes and attributes with the various data sources, and it is used to build the data extraction schema that the extractor module uses to retrieve data from the various sources.

The architecture also has a Query Handler module that receives and handles the multiple queries issued against the data sources. Another module, the Instance Generator, provides information regarding any error that might occur during data querying and extraction. A further, very important module is the Ontology Schema, against which the data is mapped.
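To summarize how these components fit together, the skeleton below sketches one plausible way the Mapping module, Extractor Manager, Query Handler, Instance Generator and Ontology Schema could collaborate. The class and method names are invented for illustration and are not taken from Cardoso (2007).

```python
# A skeleton (names invented, not from Cardoso 2007) of how the components of
# the Schematic Transformation module could collaborate: the Query Handler
# receives a query, the Mapping module says where the data lives, the Extractor
# Manager fetches it, and the Instance Generator serializes the result.
class OntologySchema:
    def classes_and_attributes(self):
        return {"Vehicle": ["model", "year"]}

class MappingModule:
    def __init__(self, schema):
        self.schema = schema
    def extraction_schema_for(self, attributes):
        # Intersect the requested ontology attributes with the known sources.
        return {a: {"source": "dealer_feed.xml", "element": a} for a in attributes}

class ExtractorManager:
    def extract(self, extraction_schema):
        return {a: f"<{spec['element']} from {spec['source']}>"
                for a, spec in extraction_schema.items()}

class InstanceGenerator:
    def serialize(self, data):
        return data  # the real module would emit XML and report errors

class QueryHandler:
    def __init__(self, mapping, extractor, generator):
        self.mapping, self.extractor, self.generator = mapping, extractor, generator
    def answer(self, attributes):
        schema = self.mapping.extraction_schema_for(attributes)
        return self.generator.serialize(self.extractor.extract(schema))

if __name__ == "__main__":
    handler = QueryHandler(MappingModule(OntologySchema()),
                           ExtractorManager(), InstanceGenerator())
    print(handler.answer(["model", "year"]))
```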

The module for mapping data

The mapping module makes it possible to map a remotely located data source to the ontology that exists on the local machine. Mapping takes place by crossing the information from the data sources with the XML schema. The process may give rise to two extraction scenarios, depending on the characteristics of the data source: the source may contain a single instance (such as a document describing one vehicle model) or multiple data records (such as a document describing several vehicle models). The scenario determines the nature of the mapping and of the data extraction.
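The sketch below illustrates the two extraction scenarios just described: a document holding a single instance versus one holding multiple records. The element names and the detection rule are assumptions made for this example.

```python
# A minimal sketch of the two extraction scenarios: a document holding a
# single instance versus one holding multiple records. The "vehicle" element
# name and the detection rule are illustrative assumptions.
import xml.etree.ElementTree as ET

SINGLE = "<vehicle><model>Roadster</model><year>2019</year></vehicle>"
MULTIPLE = ("<vehicles>"
            "<vehicle><model>Roadster</model><year>2019</year></vehicle>"
            "<vehicle><model>Hatch</model><year>2021</year></vehicle>"
            "</vehicles>")

def extract(xml_text, record_tag="vehicle"):
    """Return a list of records whether the source is single- or multi-record."""
    root = ET.fromstring(xml_text)
    if root.tag == record_tag:            # single-instance scenario
        records = [root]
    else:                                 # multiple-record scenario
        records = root.findall(record_tag)
    return [{child.tag: child.text for child in rec} for rec in records]

if __name__ == "__main__":
    print(extract(SINGLE))    # one mapping is produced
    print(extract(MULTIPLE))  # one mapping per record
```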

The Extractor manager module

This module's main function is to handle the data sources used to retrieve the raw data specified by the query parameters. The extraction techniques vary according to the data source, which means that the extractor must support different extraction methods. The mapping and extractor architectures are open, allowing the supported data types, extraction methods and languages to be extended seamlessly.

The Schematic Transformation module accomplishes its role by first obtaining the schema of the data to be extracted and then obtaining the definition of the data source; the final step is the data extraction itself. After the system processes a query, it performs the extraction needed to fulfil it. Extraction is carried out on the basis of attributes: the extractor retrieves the data using the schemas of the desired attributes, which tell the extractor how to execute the extraction. The attributes are associated with the data sources and carry the characteristics of their connections, so the extractor must determine how to connect effectively to each data source. After the extraction schema is retrieved, the extractor determines the definition of the associated data source in order to access it, and the extraction can proceed. The extraction process is mediated and involves the use of wrappers and extractors.
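As a sketch of this open, wrapper-based design, the example below registers one extraction method per source type and lets the manager pick the right one from the data source definition. The source types, definitions and registry are illustrative assumptions, not the actual SIM extractor.

```python
# A minimal sketch of an open extractor architecture: extraction methods are
# registered per source type, and the manager picks one from the data source
# definition. Source definitions and the registry itself are illustrative only.
extraction_methods = {}

def register(source_type):
    """Decorator used to plug in new extraction methods without changing the manager."""
    def wrap(fn):
        extraction_methods[source_type] = fn
        return fn
    return wrap

@register("xml_file")
def extract_from_xml(definition, attributes):
    # A real wrapper would open definition["path"] and apply the attribute
    # schemas; here we only show the dispatch mechanism.
    return {a: f"value of {a} from {definition['path']}" for a in attributes}

@register("rest_api")
def extract_from_api(definition, attributes):
    return {a: f"value of {a} from {definition['url']}" for a in attributes}

def run_extraction(source_definition, attributes):
    """Extractor Manager: choose how to connect based on the source definition."""
    method = extraction_methods[source_definition["type"]]
    return method(source_definition, attributes)

if __name__ == "__main__":
    print(run_extraction({"type": "xml_file", "path": "cars.xml"},
                         ["model", "year"]))
```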

The generation of instances

Instances are created by the Instance Generator module. Its work is to serialize the output data format and to handle errors. The Schematic Transformation module effectively converts unstructured, semi-structured and structured formats to eXtensible Markup Language (XML). The generation of XML instances is automatic, because the extracted information conforms to the XML schema.
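A minimal sketch of this serialization step is given below: one extracted record is turned into an XML instance. The element names are assumed for the example; the real module would follow the XML schema produced by the mapping step.

```python
# A minimal sketch of instance generation: serializing an extracted record
# into an XML instance. Element names are assumptions for illustration.
import xml.etree.ElementTree as ET

def generate_xml_instance(record, root_tag="vehicle"):
    """Serialize one extracted record as an XML element."""
    root = ET.Element(root_tag)
    for field, value in record.items():
        child = ET.SubElement(root, field)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(generate_xml_instance({"model": "Roadster", "year": 2019}))
    # -> <vehicle><model>Roadster</model><year>2019</year></vehicle>
```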

The handling of queries

Queries are handled by a dedicated module known as the Query Handler. A query is defined by Suciu (2003) as a generic transformation of databases; in other words, it is a function that maps one relation to another. Queries are the events that set the Schematic Transformation module on its course. Input is expressed in a higher-level semantic query language, and the query is then converted into requests expressed in terms of XML elements. The extraction module and the query handler communicate via the Syntactic-to-Semantic Query Language (S2SQL), as pointed out by Cardoso (2007). S2SQL is a much simplified SQL in which the location of the data is transparent from the query's point of view.
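The exact S2SQL syntax is defined by Cardoso (2007) and is not reproduced here; the sketch below only illustrates the general idea of turning a location-transparent, attribute-level query into per-source requests expressed in terms of XML elements. The query format and the mapping table are invented for the example.

```python
# A hypothetical sketch of the query handler's job: turn a location-transparent
# query over ontology attributes into per-source requests on XML elements.
# This is NOT the actual S2SQL syntax from Cardoso (2007); the query format
# and mapping table are invented for illustration.

# Which XML element, in which source, backs each ontology attribute.
attribute_locations = {
    "Vehicle.model": ("dealer_feed.xml", "vehicle/model"),
    "Vehicle.year":  ("dealer_feed.xml", "vehicle/year"),
    "Vehicle.price": ("pricing_api",     "offer/amount"),
}

def plan_requests(requested_attributes):
    """Group the requested attributes into one XML-element request per source."""
    requests = {}
    for attr in requested_attributes:
        source, element = attribute_locations[attr]
        requests.setdefault(source, []).append(element)
    return requests

if __name__ == "__main__":
    # The user asks for attributes without saying where the data lives.
    print(plan_requests(["Vehicle.model", "Vehicle.price"]))
    # {'dealer_feed.xml': ['vehicle/model'], 'pricing_api': ['offer/amount']}
```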

The transformation of syntactic data to a semantic one

The weaknesses of XML make it necessary to devise better data integration techniques. This problem can be handled effectively by adopting Semantic Web technologies such as RDF, RDFS and OWL ontologies. The function of the ontologies is to provide the semantic definitions used in integrating the data. A dedicated module transforms the syntactic information infrastructure defined in the XML file into a semantic data infrastructure by means of an OWL ontology. The module for transforming syntactic data into semantic data has mapping support and is fully automated.
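A minimal sketch of this lifting step, assuming the third-party rdflib Python library, is shown below: an XML instance is converted into RDF triples typed against an OWL class. The ontology namespace, class and property names are placeholders, not the actual SIM ontology.

```python
# A minimal sketch (assuming the third-party rdflib library) of lifting an XML
# instance into RDF triples typed against an OWL class. The ontology namespace,
# class and property names are placeholders, not the actual SIM ontology.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/onto#")
XML_INSTANCE = "<vehicle><model>Roadster</model><year>2019</year></vehicle>"

def xml_to_rdf(xml_text, instance_iri):
    """Turn each XML child element into a property of one RDF individual."""
    g = Graph()
    g.bind("ex", EX)
    subject = URIRef(instance_iri)
    g.add((subject, RDF.type, EX.Vehicle))   # type against the OWL class
    for child in ET.fromstring(xml_text):
        g.add((subject, EX[child.tag], Literal(child.text)))
    return g

if __name__ == "__main__":
    graph = xml_to_rdf(XML_INSTANCE, "http://example.org/data/vehicle1")
    print(graph.serialize(format="turtle"))
```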

3.3 The broker architecture

What is a context broker?

A broker is a central medium that facilitates the transfer of information. It is a common address or gateway used by various clients to access various services. The role of the broker is to interact with one or more server applications. The main roles of a context broker are to receive SOAP requests from client applications in XML format and to initiate calls to the various server applications. The composed calls include the list of arguments for data input as well as the sequencing instructions for calling the server applications.
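The sketch below illustrates the broker idea: one gateway receives a client request and calls the registered server applications in the prescribed order, passing each its input arguments. The application names, arguments and the plain-dictionary request format (standing in for a SOAP/XML envelope) are invented for illustration.

```python
# A minimal sketch of the broker idea: one gateway receives a client request
# and calls the registered server applications in sequence. The application
# names, arguments and the plain-dict request format (a stand-in for a
# SOAP/XML envelope) are invented for illustration.
server_applications = {
    "patient_registry": lambda args: {"patient": args["patient_id"], "name": "J. Doe"},
    "care_scheduler":   lambda args: {"patient": args["patient_id"], "next_visit": "2011-06-01"},
}

def broker(request):
    """Call each listed server application in order and collect the results."""
    results = {}
    for step in request["call_sequence"]:                       # sequencing instructions
        app = server_applications[step["application"]]
        results[step["application"]] = app(step["arguments"])   # input arguments
    return results

if __name__ == "__main__":
    client_request = {
        "call_sequence": [
            {"application": "patient_registry", "arguments": {"patient_id": "42"}},
            {"application": "care_scheduler",   "arguments": {"patient_id": "42"}},
        ]
    }
    print(broker(client_request))
```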

The aim of this section is to support decision-making processes in which multimodal information is obtained from a group of assorted, independent agencies. The field of health and social care has been selected because it offers practical examples of every problem for which this research is seeking technical solutions. The prevailing approach of the IT fraternity has been mandatory, top-down, large-scale IT systems encompassing every contributing organization; these organizations include general medical practitioners and acute hospitals. Besides, …
