Abstract
Cloud computing revolutionizes data management by offering centralized repositories or services accessible over the Internet. These services, hosted by a single provider or distributed across multiple entities, facilitate seamless access for users and applications. Additionally, cloud technology enables federated search capabilities, allowing organizations to amalgamate data from diverse sources and perform comprehensive searches. However, such integration often leads to challenges in data quality and duplication due to structural disparities among datasets, including variations in metadata. This research presents a novel provenance-based search model designed to enhance data quality within cloud environments. The model expands the traditional concept of a single canonical URL by incorporating provenance data, thus providing users with diverse search options. Leveraging this model, the study conducts inferential analyses to improve data accuracy and identify duplicate entries. To verify the proposed model, two research-paper datasets from the Kaggle and DBLP repositories are used, and the model effectively identifies duplicates even with partial queries. Tests demonstrate the system's ability to remove duplicates based on title or author in both single and distributed dataset scenarios. Traditional search engines struggle with duplicate content, resulting in biased results or inefficient crawling. In contrast, this research uses provenance data to improve search capabilities, overcoming these limitations.
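To make the abstract's description more concrete, the sketch below shows one possible way to represent provenance-augmented paper records and to match or merge duplicates by title or author using partial queries. It is a minimal illustration only, not the paper's actual model: the field names (`title`, `authors`, `provenance`), the normalization, and the substring-based matching are all assumptions introduced here for demonstration.

```python
# Illustrative sketch only: assumption-based matching of provenance-augmented
# paper records by title or author; not taken from the published model.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PaperRecord:
    title: str
    authors: List[str]
    # Provenance metadata: source name mapped to that source's record URL.
    provenance: Dict[str, str] = field(default_factory=dict)


def _normalize(text: str) -> str:
    """Lower-case and collapse whitespace so partial queries match loosely."""
    return " ".join(text.lower().split())


def find_duplicates(records: List[PaperRecord], query: str) -> List[PaperRecord]:
    """Return all records whose title or any author contains the (partial) query."""
    q = _normalize(query)
    matches = []
    for rec in records:
        in_title = q in _normalize(rec.title)
        in_authors = any(q in _normalize(a) for a in rec.authors)
        if in_title or in_authors:
            matches.append(rec)
    return matches


def deduplicate(records: List[PaperRecord]) -> List[PaperRecord]:
    """Keep one record per normalized title, merging provenance from duplicates."""
    seen: Dict[str, PaperRecord] = {}
    for rec in records:
        key = _normalize(rec.title)
        if key in seen:
            # Merge provenance so the surviving record points at every source.
            seen[key].provenance.update(rec.provenance)
        else:
            seen[key] = rec
    return list(seen.values())


if __name__ == "__main__":
    # Two copies of the same (hypothetical) paper harvested from different sources.
    kaggle_rec = PaperRecord(
        title="A Provenance-Based Search Model",
        authors=["A. Author", "B. Author"],
        provenance={"kaggle": "https://example.org/kaggle/123"},
    )
    dblp_rec = PaperRecord(
        title="A provenance-based  search model",
        authors=["A. Author"],
        provenance={"dblp": "https://example.org/dblp/456"},
    )
    combined = [kaggle_rec, dblp_rec]
    print(len(find_duplicates(combined, "provenance-based")))  # 2 partial-title hits
    print(len(deduplicate(combined)))                          # 1 merged record
```

The merge step keeps a single record while retaining every source URL in its provenance map, which mirrors, at a toy scale, the idea of going beyond a single canonical URL described in the abstract.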
| Original language | English |
| --- | --- |
| Article number | e13600 |
| Journal | Expert Systems |
| Volume | 42 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2025 |
Bibliographical note
Publisher Copyright: © 2024 John Wiley & Sons Ltd.
Keywords
- cloud computing
- duplicates identification
- inferences
- provenance
ASJC Scopus subject areas
- Control and Systems Engineering
- Theoretical Computer Science
- Computational Theory and Mathematics
- Artificial Intelligence