DESIGN AND IMPLEMENTATION OF A CROSS-PLATFORM DATABASE SYSTEM FOR SQL AND NoSQL INTEROPERABILITY



ABSTRACT

Relational databases have long been the norm in the Information Technology (IT) industry because of their ease of operation at a time when low-cost servers were considered extremely powerful. However, with the advent of Web 2.0 technology and the rapid growth of computing, these databases were unable to sustain major demands such as speed, scalability, reliability, continuous availability, cost reduction and location independence. As a result of this inadequacy, NoSQL systems were built to combat this complexity, but they were unable to fully address the problem. In order to address these problems, an enhanced distributed data system is desired. The research focused on the design and implementation of a cross-platform database system for Structured Query Language (SQL) and "Not only SQL" (NoSQL) interoperability. The Iterative and Incremental methodology was adopted. A conceptual framework involving a new high-level model and architectural layout was formed for exploring the strengths of the database management systems. Interoperability of five database engines (MySQL, MongoDB, Cassandra, Redis and, for inclusiveness, Neo4j) was achieved using the PHP scripting language and parsers for uniform interaction across the database engines and the layer. Test operations for Create/Insert, Read, Write and Delete were carried out using 500, 5,000 and 10,000 records respectively for the individual systems. The experimental results covered the individual databases (MySQL, MongoDB, Cassandra, Redis and the Neo4j graph NoSQL system) in terms of speed, performance and the ability to move data about. Qualitative and quantitative metrics were deployed for evaluating the performance of seven different hybrid models. In a read-intensive environment, MySQL and MongoDB performed better than the other systems, while Cassandra performed best in a write-intensive environment.
The tests showed the Redis system to be best for insert operations, as it performed insertions faster than the other systems involved. MySQL and MongoDB were also found to be very good at performing deletions in the database. The strengths and weaknesses of these systems were ascertained through these tests, and with this knowledge an integrated, distributed, interoperable cross-platform database system with better throughput was built. The hybrid system developed (NoSQL2) not only solved the problems of data movement and speedy access to information domiciled in the database but also offered effective, automatic load balancing across the different database engines.

CHAPTER ONE

INTRODUCTION

1.0  Background of the Study

Today, the central part of every organization is data (Goyal et al., 2016). Owing to the numerous database choices obtainable, each organization uses varied databases, each specific to the problem it is trying to resolve. With the innovations in the Information Technology (IT) industry and the use of Big Data, cloud computing, web applications and the Internet of Things, the nature of the demands on these databases is constantly changing. One such demand is the need not only to manage data which is high in velocity, volume, veracity and variety but also to grant speed, scalability, reliability, continuous availability, cost reduction and location independence (distribution) at the same time. The existing popular relational databases and some popular hybrid databases fell short of delivering on these demands, and so most organizations are migrating from relational to semi-structured and unstructured databases (Goyal et al., 2016). According to Lawrence (2014), "although most NoSQL systems do not support SQL, there is no fundamental reason why they could not. The 'NoSQL' label is a misnomer. The value of these systems has nothing to do with SQL support, but rather on their different architectural design decisions in order to achieve scalability and performance". Lawrence (2014) further noted that NoSQL systems have been proposed to deal with applications and problem domains inadequately served by relational database management systems. These are mainly Big Data domains involving web data, such as supporting millions of interactive users or performing analytics on terabytes of data such as weblogs and click streams. The data domiciled in this field of analytics is semi-structured, variable and massive.

The NoSQL systems, which are open source, are built, by virtue of their scalable nature and design model, to handle larger data volumes at better performance than relational systems. Another advantage of the NoSQL system is that it offers better support for programmers. Systems like MongoDB permit a JavaScript Object Notation (JSON) object to be stored, which can be readily converted into JavaScript objects in code (internally, MongoDB stores a binary-encoded form of JSON called BSON). This singular advantage makes knowledge of SQL (Structured Query Language) unnecessary for many programmers as a result of the simplicity and flexibility offered by NoSQL management systems (Lawrence, 2014).
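The schema flexibility described above can be seen in a short sketch. The middleware in this work is written in PHP, but Python is used here for brevity, and all field names and values are invented for illustration. Two documents with different shapes can sit in the same document-store collection, whereas a fixed relational schema would need NULL-padded columns to hold both:

```python
import json

# Two "user" records with different fields: a document store such as
# MongoDB accepts both in the same collection; a relational table would
# require one fixed column set for every row.
user_a = {"name": "Ada", "email": "ada@example.com"}
user_b = {"name": "Obi",
          "phones": ["+234-803-000-0000"],
          "address": {"city": "Enugu"}}

# JSON is the text wire format; MongoDB stores a binary encoding (BSON).
for doc in (user_a, user_b):
    encoded = json.dumps(doc)      # serialize for transport/storage
    decoded = json.loads(encoded)  # recover the original structure
    assert decoded == doc          # round trip is lossless

print(json.dumps(user_b, indent=2))
```

The nested `address` object and the variable field set are exactly what the schema-less model tolerates and the rigid relational model does not.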

The increasing output of data (structured, semi-structured and unstructured) has caused an upheaval in the database community. This disruption in data management stems from the lack of uniformity, both in syntax and in the query languages used to access the various database systems. Research in both academia and industry has been underway to find a lasting solution for the integration/interoperability of SQL and NoSQL systems. This is a result of the overhead accrued from running separate APIs (Application Programming Interfaces) on the different NoSQL systems. There is also a need for skill in handling these individual APIs, which has prompted a search into the possibility of handling the different systems efficiently. Data storage has become an increasing need, which has resulted in major industries, institutions and establishments seeking out their own solutions for handling Big Data efficiently. The available solutions so far have addressed this issue only partly and left serious gaps. Some of the gaps identified are security issues arising from third-party involvement in cloud computing, near inconsistency of the database, heterogeneous data structures, and the different APIs that result from the varying query models used in the NoSQL systems, which lack uniformity. These gaps, when filled, will be a major breakthrough in the field of NoSQL Database Management Systems.

This need for Big Data management has called for systems which are persistent and can handle data concurrency. According to Lombardo et al., (2012), the scalability of NoSQL systems in the cloud computing field has made them an interesting dimension of IT which holds prospects for improvement as regards query optimization and execution. Though there are drawbacks attending the cloud/NoSQL technology, the systems perform optimally individually and, with proper implementation of algorithms, could be improved further. This could take the form of a merge of similar systems to ascertain functionality. Furthermore, the NoSQL systems are heterogeneous in their functionality as a result of different query languages, different consistency models and differences in the CAP trade-offs adopted by the various systems (Dharmasiri and Goonetillake, 2013). These varying features leave a lack of proper cohesion between the systems involved, which is mainly due to the lack of a standard query language.

According to Gadkari et al., (2014), the increasing amount of data and database accesses has rendered the traditional database systems inefficient for the purpose of Big Data management due to space requirements. This need for systems that can handle large volumes of non-structured and semi-structured data has driven researchers and the industry to a viable option, which is the NoSQL Database Management System. NoSQL, in lay terms, stands for 'Not Only SQL'. Ramanathan et al., (2011) described the recent migration for production purposes by some leading brands like Amazon and Google from relational to NoSQL databases as a result of NoSQL's ability to handle unstructured data such as word-processing files, e-mail, multimedia and social media efficiently. Various NoSQL systems have been developed to handle data of different formats, and the four most widely known are key-value pair, document-store, column-share and graph databases. These NoSQL systems handle data of different structures, but though they have been able to meet different database needs, there are still issues attributable to the use of these various systems.

According to Cure et al., (2012), "NoSQL covers a wide range of technologies and data architectures for managing web-scale data and having the following common features: persistent data, non-relational data, avoid join operations, distribution, massive horizontal scaling, no fixed and flexible schemata, replication support, individual usually procedural query systems rather than using a standard declarative query language, consistent within a node of the cluster and eventually consistent across the cluster and simple transactions".

The SQL systems are highly structured in their functionality, using rows and columns arranged in well-defined tables. Join operations are noted as one of the main features of the RDBMS (Relational Database Management System), which outputs data that have functional dependency on the key and which is well normalized to eliminate data redundancy and guarantee integrity of data (Gadkari et al., 2014). As stated by Hecht and Jablonski (2011), the normalized data model and full ACID support of the relational/SQL DBMS are not suitable for dealing with Web 2.0 technology. This is because Web 2.0 technology deals with very high volumes of data, which come in peta-, tera-, zetta- and yottabytes, and demands massive, concurrent read-write access that must be responded to with low latency. The joins and locks of the relational database model, while ensuring data integrity, are not suitable for distributed systems where data are replicated and spread across multiple nodes as a result of the need for high availability of data.

Though these management systems are not very closely related and are specialized to perform specific tasks, there are some similarities in their build-up. The column-share NoSQL systems display data in columns, which is similar to the SQL Database Management System (DBMS). According to Lombardo et al., (2012), the column-share NoSQL database model is the most closely related to the SQL management system, as it confines data into columns in different tables and also supports the vertical partitioning storage model.

A common ground (between SQL and column-share) could be reached in terms of data and query model. An integration of these two management systems, SQL and NoSQL, each with its own strengths and limitations, could give a system which is both highly consistent and offers high throughput/availability.

Big Data Analysis

Big Data, as defined by Qi et al., (2014), is a term which encompasses the various techniques used to capture, process, analyze and visualize potentially large sets of data within a judicious time frame, something not achievable with regular IT technologies. Big Data management requires a combination of different techniques which, when properly managed, allows for effective data management. There are three basic features of Big Data: Volume (the amount of data involved), Velocity (the rate at which data is generated) and Variety (the heterogeneous nature of the data involved). As shown in Figure 1.1, there are different sources of Big Data which spring from different bases or contexts like social media, multimedia and also cloud technology. Different works have been conducted in this field of integration of SQL and NoSQL systems: individual NoSQL systems like graph, document-store and column-family have been made interoperable with the SQL platform. These already existing works, however, have not integrated all the various NoSQL models (key-value pair, column-share, document-store and graph systems) with the traditional relational model, which is SQL (Structured Query Language). Doshi et al., (2013) classified data growth into vertical, chronological and horizontal. A solution which blends SQL with NoSQL by use of a data integrator, which first sifts updates in the RDBMS and channels them into NoSQL, was employed. A framework named Hibernate-OGM, used to reduce the programming complexity of NoSQL, was used to aid the relational model; this framework, however, has difficulty handling data at good speed.

Liu and Vitolo (2013) explained that the concept of a 'Graph Cube' is proposed as a design which integrates graphs with tables. This concept serves as a prototype which is the basis of a graph data warehouse. Some DML (select, insert, update and delete) and DDL (create, drop and update) statements in SQL are synchronized with the graph data model to give GDML (Graph Data Manipulation Language) and GDDL (Graph Data Definition Language). This model allows for views in the graph data warehouse.

As stated by Kaur and Rani (2013), graph databases represent data in their natural format using graphical forms, which offer better representation than tabular forms. This eliminates impedance mismatch (incompatibility of the database with the programming language), which is the problem mostly associated with object-relational systems that store data in tabular form. The graph DBMS, as explained by Indrawan-Santiago (2012), faces the greatest challenge amongst the NoSQL systems as it does not support horizontal partitioning, which is a drawback of this model. As described by Gudivada et al., (2014), the graph database is highly scalable and has persistent in-built memory which can be used for clustered systems in both single and distributed data centers. The interoperability of the graph system with other database systems has caused an improvement in data management technology.

Lawrence (2014) worked on SQL and NoSQL integration using MongoDB and MySQL. The architecture of the system is based on the construction of a JDBC driver which accepts SQL queries through the use of an SQL parser that produces a parse tree and a relational operator tree. This process is made possible through a virtualization Execution Engine which serves as the middle layer, accepting and translating information across the two management systems.

This research seeks to serve as a furtherance of the work carried out by Lawrence (2014) at the University of British Columbia, Canada, where he stated that "Future work involves benchmarking the performance of other supported NoSQL systems such as Cassandra. We are also working on parallelizing the virtualization engine for a cluster environment." Also, Doshi et al., (2013) proposed further work using mixed approaches in blending SQL IMDBs with distributed, fault-tolerant NoSQL systems, which demands a combination of in-memory and massively parallel computing.

1.1  Statement of the Problem

Presently, there have been many demands by users for improvements in the efficiency of databases in handling large data effectively. The existing databases fell short of delivering on these demands in the area of effective database management. These problems are inefficiencies in the areas of speed, scalability, reliability, continuous availability, cost reduction and location independence.

In response to the problems identified, this research established the need for the following:

  1. An enhanced hybrid system capable of accommodating structured, semi-structured and unstructured datasets.

Scherzinger et al., (2013) explained that "what is missing in today's frameworks is a means to systematically manage the schema of stored data, while at the same time maintaining the flexibility that a schema-less data store provides". Big Data management poses a serious obstacle in the IT research and industry domains in terms of the size (volume of data), variety (semi-structured/unstructured) and velocity (speed at which data changes) of the data involved. The increasing amount of data produced has rendered the traditional (SQL) systems incapable of handling these volumes of data, both in speed and in structure. As a result of this incapability of SQL systems to handle different structures of data, vast volumes of data cannot be accommodated.

  2. A unified interface (integrated system) that is fast and allows seamless movement of large datasets or Big Data, with automatic load balancing capability.

According to Srivastava and Shekokar (2016), Big Data has many issues and challenges, such as storage, efficient access, data analytics and data security. Research in the Big Data domain, which has been broadly segregated into three main categories, 1) the infrastructure domain, 2) data processing and 3) analytics, is based predominantly on the huge volume of data, which has created difficulty in providing good infrastructure with 100% uptime. Many cloud service providers and distributed frameworks claim to provide good elasticity and reliability, but many challenges still prevail in terms of efficient load balancing and security.

  3. An effective system optimized for analytics and offering good infrastructure.

Zafar et al., (2016) opined that the performance of NoSQL can be quantified by numerous features such as the architecture, data model, query languages, consumer API, ease of use and scalability; data in NoSQL is typically collected from different foundations and needs to be handled in real time. As such, NoSQL systems do not support the ACID (atomicity, consistency, isolation and durability) principle, but have a shared and fault-tolerant design. As stated by Li and Gu (2019), new systems have been developed to combat this major data issue, but the problem still persists. The problem lies in the movement of data across different systems, a result of the different APIs employed by the different NoSQL developers. Availability of data is at the forefront of the NoSQL management system, which leaves consistency of data a secondary consideration in this architecture. SQL stands for consistency and integrity of data but lacks the ability of horizontal partitioning/scalability, as it partitions vertically. These various systems, the SQL and the different NoSQL systems, each have their strong attributes; this research has carried out a proper integration of them, availing the research community and industry of a system reliable for parsing data across the different systems involved while also providing high consistency and throughput.

1.2  Aim and Objectives of the Study

The aim of this work is to design and implement a Cross-platform Database system for SQL and NoSQL interoperability.

The specific objectives of the work are to:

  1. develop an interoperable system that will increase the speed and ease of movement of data.
  2. design an integrated system capable of accommodating vast volumes of data.
  3. design a novel methodology for exploiting the strengths of the query languages.
  4. test for consistency, availability and improvement using test-case scenarios of the various systems to ensure uniform communication and operation.

1.3  Significance of the Study

"In practice, there are situations in which storing data in the form of a table is inconvenient, or there are other kinds of relationship between records, or there is the necessity to quickly access the data. A NoSQL database provides a mechanism for storage and retrieval of data that is different from the typical relational model" (Stanescu et al., 2016).

This work is important to organizations and big corporations as it offers an interoperable system which is both fast and reliable. Data can be moved around and understood across the board for the relational, NoSQL and NewSQL platforms. This new hybrid system offers full scalability, cost effectiveness, compatibility and also a consistent and available repository/storage facility.

As posited by Cure et al., (2012), solutions in the NoSQL ecosystem are emerging in various domains such as social, scientific and even financial applications. Nevertheless, many actors consider that, in order to increase the adoption rate of NoSQL databases, NoSQL systems need to integrate some new features, and these desired features correspond to the ones found in RDBMS (Relational Database Management Systems). These features were identified as those concerned with richer schema support, more declarative query languages and business intelligence (BI) processing. The researchers further argued that integrating these features requires considering the semantics of the elements of the application domain, which could be a major breakthrough for both NoSQL stores and the semantic community, since RDBMS is not really reactive in integrating semantics.

Lawrence (2014) put forward three primary reasons for supporting SQL querying of NoSQL systems: 1) SQL is a declarative language that allows descriptive queries while hiding implementation and query execution details; 2) SQL is a standardized language, allowing portability between systems and leveraging a massive existing knowledge base of database developers; 3) supporting SQL allows a NoSQL system to seamlessly interact with other enterprise systems that use SQL and JDBC/ODBC without requiring changes. A cross-platform system which allows for interoperability between different NoSQL systems and the SQL system has not been fully implemented, which is the purpose of this research. Though various works produced by different researchers have linked different SQL systems to their NoSQL counterparts, a concise interoperable system between these two schools of thought is yet to be realized. This is the basis of this project: to design and implement a functional system which allows parsing of data across the board for both the SQL and NoSQL database management systems. According to Lawrence (2014), the key advantage of supporting SQL is to allow for system portability.
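To illustrate the kind of translation such a layer performs, the sketch below maps one very restricted SQL shape onto a MongoDB-style filter document. It is a toy written in Python (the middleware in this work uses PHP), and the table, field and value names are invented; a real virtualization engine, such as the one Lawrence describes, builds a full parse tree and relational operator tree rather than using a regular expression.

```python
import re

def sql_where_to_filter(sql: str) -> dict:
    """Translate "SELECT * FROM <coll> WHERE <field> = '<value>'" into a
    MongoDB-style filter document.  Illustrative only: supports exactly
    one query shape with a single string-equality predicate."""
    m = re.match(
        r"SELECT\s+\*\s+FROM\s+(\w+)\s+WHERE\s+(\w+)\s*=\s*'([^']*)'",
        sql.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("unsupported query shape")
    collection, field, value = m.groups()
    return {"collection": collection, "filter": {field: value}}

print(sql_where_to_filter("SELECT * FROM users WHERE name = 'Ada'"))
# {'collection': 'users', 'filter': {'name': 'Ada'}}
```

The same declarative text thus drives two very different back ends: an RDBMS would execute it directly, while a document store receives the equivalent filter document.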

This research came up with a novel methodology, a hybrid methodology for database management called the 'Holistic Iterative and Incremental Approach' for cross-platform modeling. This is an improvement on the already existing Iterative and Incremental model. The idea arose from the need to iteratively evaluate the data for both consistency and availability within a distributed data network. In the words of Kepner et al., (2016), "it is now recognized that special-purpose databases (such as the system proposed by this study) can be 100 times faster for a particular application than a general-purpose database".

1.4  Scope of the Study

This research covered a broad aspect of SQL and NoSQL database systems. The work looked at the various conventions/drivers on which the different systems run. SQL runs on relational algebra (using intersections and unions to perform joins), which is how the relations are put together. Polyglot persistence has been known to form a major part of how NoSQL systems run, being the combination of various techniques and technologies in managing and outputting data/information. The research was used to find a common ground between these two unique systems and how they can efficiently parse, exchange and accommodate data within a unified interface. The study integrated an enhanced, functional and interoperable cross-platform database with both SQL and NoSQL peculiarities, which is unambiguous.

1.5  Limitations of the Study

There are various limitations attending this study; foremost among them are time and the tools to be used. The work is time-constrained, as the research is required to be started and completed within a given time frame.

The researcher does not have direct access to some of the tools which will be used for the work and will depend on the limited-access editions of the open-source software. Various tools to be employed, for example SQL Server and the client-side RoboMongo for MongoDB, cannot be accessed through the University and will be sourced through external means. There is also limited existing research work in this area.

1.6  Definition of Terms

Database: – an organized, machine-readable collection of symbols, to be interpreted as a true account of some enterprise. A database is machine-updatable too, and so must also be a collection of variables. A database is typically available to a community of users, with possibly varying requirements (Darwen, 2010).

Relational Database: – a database whose symbols are organized into a collection of relations (Darwen, 2010).

RDBMS: – Relational Database Management Systems are based on the relational model defined by a schema. This model uses two concepts: table and relationship. A relational table represents a well-defined collection of rows and columns, and the relationship is established between the rows of the tables. Relational data can be queried and manipulated using the SQL query language, and the RDBMS is mostly used for financial records, manufacturing information, staff and salary data and so on (Stanescu et al., 2016).

Relation: – a formal term in mathematics, in particular in the logical foundation of mathematics, which appeals to the notion of relationships between things or any number of things (Darwen, 2010). "A mathematical definition of database tables sufficient for their representation without constraining their implementation" (Kepner et al., 2016).

Relational Algebra: – an algebra with the primary purpose of providing a collection of operations on relations of all degrees (not necessarily binary) suitable for selecting data from a relational database. The relations to be operated upon are assumed to be normalized; that is, the domains on which they are defined are simple (Codd, 1972).

Relational Algebra: – a set of mathematical operators that operate on relations and yield relations as results (Darwen, 2010).
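The closure property in the definition above (operators take relations and yield relations) can be sketched in a few lines of Python. Relations are modeled here as lists of dictionary rows, and the table contents are invented for illustration; this is a teaching sketch, not how an RDBMS actually implements the algebra:

```python
# Two sample relations (lists of dict-rows); values are illustrative only.
employees = [
    {"emp": "Ada", "dept": 10},
    {"emp": "Obi", "dept": 20},
]
departments = [
    {"dept": 10, "name": "IT"},
    {"dept": 20, "name": "HR"},
]

def select(rel, pred):       # selection: keep rows satisfying a predicate
    return [r for r in rel if pred(r)]

def project(rel, attrs):     # projection: keep only the named attributes
    return [{a: r[a] for a in attrs} for r in rel]

def natural_join(r1, r2):    # natural join: combine rows agreeing on shared attributes
    shared = set(r1[0]) & set(r2[0])
    return [{**a, **b} for a in r1 for b in r2
            if all(a[k] == b[k] for k in shared)]

# Because every operator returns a relation, the operators compose freely:
it_staff = project(select(natural_join(employees, departments),
                          lambda r: r["name"] == "IT"),
                   ["emp"])
print(it_staff)  # [{'emp': 'Ada'}]
```

Composability is the point: a query planner can reorder such operator trees without changing the result, which is exactly what SQL engines exploit.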

Relational Table: – According to Stanescu et al., (2016), the relational table represents a well-defined collection of rows and columns which has a relationship established between the rows of the tables. The data in the relational table can be queried and manipulated using SQL (Structured Query Language).

NoSQL: – systems developed to support applications not well served by relational systems, often involving Big Data processing (Lawrence, 2014). "Typical applications of NoSQL databases involve many important areas, such as the Internet, mobile computation, telecommunications, Bioinformatics, education, and energy" (Li and Gu, 2019). Cure et al., (2012) explained that NoSQL covers a wide range of technologies and data architectures for managing web-scale data, having the following common features: persistent data, non-relational data, avoidance of join operations, distribution, massive horizontal scaling, no fixed and flexible schemata, replication support, individual (usually procedural) query systems rather than a standard declarative query language, consistency within a node of the cluster with eventual consistency across the cluster, and simple transactions. Zafar et al., (2016) also defined NoSQL as systems that do not support the ACID (atomicity, consistency, isolation and durability) principle, but have a shared and fault-tolerant design.

NewSQL: – an inclusive term to refer to a clustered, non-RDBMS system where the distinction between NoSQL and NewSQL is not important (Doshi et al., 2013).

Polyglot Persistence: – a multi-database approach of introducing different database approaches in an application. Polyglot persistence allows one database to process one part of an application and the same data to be used by a different database for processing another part of the same application (Leberknight, 2008). Srivastava and Shekokar (2016) defined polyglot persistence as an arrangement in which different modules can have their own different data processing mechanisms, which can solve the problem of scalability and of processing unstructured data as well as maintaining legacy processing.
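A polyglot-persistence arrangement can be sketched as a router that sends each workload to the storage model that suits it. The two "engines" below are in-memory Python stand-ins for real backends (a Redis-like key-value store and a MongoDB-like document store); the class and field names are invented for illustration:

```python
class KeyValueStore:
    """Redis-like engine: opaque values addressed by key (fast lookups)."""
    def __init__(self):
        self.data = {}
    def put(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data[key]

class DocumentStore:
    """MongoDB-like engine: schema-less documents with field matching."""
    def __init__(self):
        self.docs = []
    def insert(self, doc):
        self.docs.append(doc)
    def find(self, **match):
        return [d for d in self.docs
                if all(d.get(k) == v for k, v in match.items())]

class Router:
    """One application, several storage models: each module gets the
    engine best suited to its access pattern."""
    def __init__(self):
        self.cache = KeyValueStore()     # e.g. sessions and counters
        self.catalog = DocumentStore()   # e.g. variable-shape records

router = Router()
router.cache.put("session:42", "ada")
router.catalog.insert({"sku": "B-1", "tags": ["db", "nosql"]})
print(router.catalog.find(sku="B-1"))
```

The same application data flows through both engines; only the access pattern decides which one serves a given module.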

Big Data: – Zafar et al., (2016) defined Big Data as data of practically unlimited quantity which is very complex to collect and store. Srivastava and Shekokar (2016) also described Big Data as coming from numerous sources which create large volumes of data, such as e-commerce, search engines, IoT (Internet of Things), mobile phones, satellites, etc. Big Data is not only about large Volume; it also includes Velocity (data changing at a very high speed) and Variety (inclusion of structured, semi-structured and unstructured data).

IoT (Internet of Things):- network of physical devices, vehicles, home appliances and other items embedded with electronics, software, sensors, actuators, and connectivity which enables these objects to connect and exchange data (Srivastava and Shekokar, 2016).

DBMS (Database Management System):- a piece of software for managing databases and providing access to them. A DBMS responds to commands given by application programs, custom-written or general purpose, executing on behalf of users. Commands are written in the database language of the DBMS, for example; SQL (Darwen, 2010).

ACID: – the Atomicity, Consistency, Isolation and Durability properties are used by the RDBMS (Relational Database Management System) for its operations on data. ACID ensures reliable performance whereby transactions correctly change the database. When a transaction modifies a value in the database, other database consumers are able to see the same value that was updated (Zafar et al., 2016).

Normalization: – a logical approach of decomposing tables to eliminate data redundancy (repetition) and undesirable characteristics like insertion, update and deletion anomalies. It is a multi-step procedure that places data into tabular form, thereby eliminating duplicate data from the relational tables. Normalization is employed for two main purposes: 1) to eliminate redundant data; 2) to ensure accurate data dependencies (Kanade et al., 2014).
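The decomposition step can be shown concretely. Below, a denormalized table repeats each department's details on every employee row; splitting the department columns into their own table, keyed by `dept_id`, removes the repetition and the update anomaly. The column names and values are invented for illustration:

```python
# Denormalized: department details repeated on every employee row.
flat = [
    {"emp": "Ada", "dept_id": 10, "dept_name": "IT", "dept_phone": "100"},
    {"emp": "Obi", "dept_id": 10, "dept_name": "IT", "dept_phone": "100"},
    {"emp": "Eze", "dept_id": 20, "dept_name": "HR", "dept_phone": "200"},
]

# Decompose: employees keep only the foreign key; department facts are
# stored exactly once, keyed by dept_id.
employees = [{"emp": r["emp"], "dept_id": r["dept_id"]} for r in flat]
departments = {r["dept_id"]: {"dept_name": r["dept_name"],
                              "dept_phone": r["dept_phone"]}
               for r in flat}

# The duplicated IT row is now stored once; changing IT's phone in one
# place cannot leave a stale copy behind (no update anomaly).
assert len(departments) == 2
print(departments[10])  # {'dept_name': 'IT', 'dept_phone': '100'}
```

Re-joining `employees` with `departments` on `dept_id` reproduces the original flat table, which is what makes the decomposition lossless.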

BASE: – the Basic Availability, Soft state and Eventual consistency properties are used by the NoSQL database for its operations. Basic Availability means apparent availability of data: when a single node fails, part of the data becomes inaccessible, but the rest of the data layer remains functional. Soft state implies that the data can change over time, even without input, because such ability may ensure eventual consistency (Zafar et al., 2016).

PACELC: – if there is a Partition, a system must trade off Availability against Consistency; Else, during normal operation, it trades off Latency against Consistency. The theorem inculcates the characteristics of the NoSQL management system: owing to the replication of data in NoSQL, a failure can go unnoticed, as replica servers serve the exact information held on the failed node. The NoSQL system considers availability versus consistency when there is a partition and, when the system performs normally without partition, considers latency versus consistency (Indrawan-Santiago, 2012).

CAP theorem: – stands for Consistency, Availability and Partition tolerance; proposed by Professor Eric Brewer in 2000. According to Benefico et al., (2012), shared-data systems have the properties of consistency, availability and partition tolerance (CAP). The theorem states that, in the presence of a network partition, a distributed database cannot provide both consistency and availability. The CAP model grants that in a shared-data scheme, only two of the three features can be satisfied at any particular point in time within the database. There are three possible configurations: consistency and partition tolerance, availability and partition tolerance, and lastly consistency and availability, which is very difficult to combine with partition tolerance (Bonnet et al., 2011).
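The trade-off can be made concrete with a toy two-replica store. When the link between replicas is "down", a CP-leaning system refuses writes it cannot replicate (keeping the copies consistent but sacrificing availability), while an AP-leaning system accepts them locally and lets the replicas diverge until the partition heals. This is a deliberately simplified sketch, not a model of any real database:

```python
class Replicas:
    """One value held on two replicas, a and b, with a configurable
    response to a network partition ("CP" or "AP")."""
    def __init__(self, mode):
        self.mode = mode
        self.a = self.b = 0
        self.partitioned = False

    def write(self, value):
        if self.partitioned:
            if self.mode == "CP":
                # Consistency first: refuse a write we cannot replicate.
                raise RuntimeError("write rejected: replica b unreachable")
            self.a = value          # AP: accept locally; b is now stale
        else:
            self.a = self.b = value  # healthy link: replicate synchronously

cp = Replicas("CP")
cp.partitioned = True
try:
    cp.write(1)                      # consistency kept, availability lost
except RuntimeError:
    pass

ap = Replicas("AP")
ap.partitioned = True
ap.write(1)                          # availability kept...
print(ap.a == ap.b)                  # ...but replicas disagree: False
```

An AP system like Cassandra later reconciles the divergent replicas (eventual consistency); a CP system simply stays unavailable for the affected writes until the partition heals.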

Query: – an expression that, when evaluated, yields some result derived from the database. Queries make databases useful. A query is not of itself a command; the DBMS might support some kind of command to evaluate a given query and make the result available for access, also using DBMS commands, by the application program. The application program might execute such commands in order to display the result (usually in tabular form) in a window (Darwen, 2010).

Virtualization Technology: – an efficient method of decoupling the hardware, software, data, network and storage of the application system from each other, which breaks the physical device barriers in data centers, servers, storage, networks, data and applications and implements a dynamic architecture. It also provides centralized management and dynamic use of physical and virtual resources, and improves elasticity and flexibility. It can also promote service delivery and manage risk (Jiang et al., 2013).

Sharding: – the introduction of new node(s) to the current system, which will not affect the system performance or shut down the system but will increase its capability, whereby data is distributed over the nodes and arranged in a non-overlapping form. These nodes can be operated on different systems and, as such, each node can independently eliminate resource disputes (Zafar et al., 2016).
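The "non-overlapping" distribution of data over nodes can be sketched with the simplest possible routing rule: hash the key and take it modulo the node count. The keys and node count are invented for illustration, and a production system (Cassandra, MongoDB) would instead use consistent hashing or range-based sharding so that adding a node re-maps only a fraction of the keys:

```python
import hashlib

def shard_for(key: str, n_nodes: int) -> int:
    """Deterministically map a key to one of n_nodes shards.
    Simplest modulo scheme, for illustration only."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n_nodes

# Three independent nodes, each holding a disjoint slice of the data.
nodes = {i: {} for i in range(3)}
for key, value in [("user:1", "Ada"), ("user:2", "Obi"), ("user:3", "Eze")]:
    nodes[shard_for(key, 3)][key] = value

# Every key lives on exactly one node; reads route the same way as writes.
assert sum(len(n) for n in nodes.values()) == 3
print({i: sorted(n) for i, n in nodes.items()})
```

Because routing is a pure function of the key, no node needs to ask any other node where a record lives, which is what lets each node operate independently.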

JSON: – JavaScript Object Notation (JSON) is a text-based, open-source format used to transfer information between server and client; JSON is derived from JavaScript. BSON, a binary-encoded serialization, is a flavor of JSON. BSON is more flexible than Protocol Buffers (Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data, useful for developing programs that communicate with each other over a wire or for storing data) and is commonly employed as an effective space-management method (Zafar et al., 2016).


