
The Study of Data Modeling Methodologies for Column-Oriented Databases

Raden Haryo Pandu Prakoso
School of Electrical Engineering and Informatics
Institut Teknologi Bandung
Bandung, Indonesia
23520052@std.stei.itb.ac.id

Fazat Nur Azizah
School of Electrical Engineering and Informatics
Institut Teknologi Bandung
Bandung, Indonesia
fazat@itb.ac.id

Abstract—Research on databases suggests that applying data modeling to NoSQL databases, as is done for SQL databases, could improve their performance. This finding has led to much research on data modeling methodologies for NoSQL databases, including column-oriented databases. The two column-oriented data modeling methodologies discussed in this research both use EER diagrams for their conceptual models: the Chebotko methodology, which also considers the application queries planned for the database, and the Poffo methodology, which depends only on its EER diagrams. The purpose of this research is therefore to analyze the difference between these two data modeling methodologies for column-oriented databases. The test results show that, on the chosen case study, the physical data model made using the Chebotko methodology causes queries to retrieve data 2.24 times more slowly than the one made using the Poffo methodology. In conclusion, modeling data in column-oriented databases using the Poffo methodology is generally safer because the sizes of the resulting column families can remain small, whereas one has to exercise caution when modeling data using the Chebotko methodology because the column families created can become too big when a query asks for too much information.

Keywords—Chebotko methodology, column-oriented databases, data modeling, EER diagrams, Poffo methodology

I. INTRODUCTION

It is widely known that NoSQL databases outperform traditional RDBMSs in terms of scalability, flexibility, and performance [1][2], and are thus better at managing big data. There are four types of NoSQL databases, one of which is the column-oriented database, which stores data not only in relational form but also in a distributed architecture; Apache Cassandra is one example.

Although NoSQL database implementations do not require the kind of modeling used for SQL databases, NoSQL developers still need to define an implicit schema, i.e. a set of assumptions about the data's structure embedded in the code that manipulates the data [3]; for example, a column 'qty' is often inferred to mean 'quantity', so its type is assumed to be integer. Regarding implicit schemas in NoSQL databases, several studies discuss the importance of modeling in NoSQL databases due to their heterogeneous nature. In [4], it is mentioned that data modeling in NoSQL databases is still necessary to understand and demonstrate their data-storing capabilities. In [5], it is argued that data design in NoSQL databases affects their scalability, consistency, and performance. In [6], it is stated that it is important to have a conceptual model that can represent various NoSQL databases, so that developers can choose the necessary NoSQL physical data models more easily. In [7], it is claimed that the schema chosen by developers, in terms of column families in column-oriented databases, affects performance. Lastly, in [8], it is noted that the absence of conceptual data models in NoSQL databases poses a great research challenge in the subsequent data warehouse design step.

Regarding this opportunity, much research proposes NoSQL database design methodologies, including ones that specifically include conceptual models in their first steps. Some use ER/EER diagrams, entity graphs, UML diagrams, and others. This research focuses on two approaches to column-oriented NoSQL database design that use EER diagrams for their conceptual models, the Chebotko methodology [9] and the Poffo methodology [10], which take two completely different approaches. This research aims to compare the read times of two data models created using these two methodologies.

The rest of this paper is organized as follows. Section II analyzes related work with emphasis on the Chebotko and Poffo methodologies. Section III introduces the case study used in this research, presents the physical models produced by both methodologies for the case study, and briefly describes the development environment used. Section IV is dedicated to the results of the testing conducted in this research and presents their analysis. Finally, Section V contains the conclusions of this research.

II. RELATED WORK

A. ER/EER Diagram

The concept of the ER diagram was first introduced by [11] and consists of strong and weak entity sets, relationship sets and their cardinalities, primary keys, and attributes. Then [12] indirectly introduced the concepts of aggregation and generalization into ER diagrams. Afterwards, [13] clarified the concept of the ECR (Entity-Category-Relationship) model introduced by [14] by adding an IS-A concept to illustrate the generalization/specialization concept of ER diagrams. They also introduced the concepts of total participation and functional participation, as well as union and disjoint generalization. Later on, the term EER was used in [15] and elaborated further in [16], which differentiates primary key attributes from the rest by underlining or coloring them and introduces the concept of multivalued attributes, although the notation was not yet illustrated. As time went by, several further notions were introduced, namely composite attributes, identifier attributes in weak entities, identifying relationship sets, and the disjoint notation. The most recent EER diagram notation can be found in [17], which also introduces the concept of aggregation, i.e. unifying several entities into one big entity, be they entities involved in binary, ternary, or even quaternary relationships.

B. Chebotko Methodology

The first of the two methodologies discussed in this research was designed specifically for Apache Cassandra and introduced in [18]. For its conceptual model, besides relying on EER diagrams, it also relies on the planned application queries that are going to be executed against the database; this means that only the elements of the EER diagrams whose information is requested by the planned queries
will be implemented in the Cassandra database. One of the main conversion rules is that if one searches for an entity (ET1) by another entity (ET2), then the primary key of ET2, being used in an equality search (the '=' sign), becomes the partition key of the corresponding column family, while the primary key of ET1 becomes a clustering key, alongside any attributes used in an inequality search (signs such as '<' or '>'). For the logical step, they introduced the Chebotko notation shown in Fig. 1, which includes data types such as collection, static, and UDT columns. A UDT (User-Defined Type) column is mainly used when a column corresponds to a composite attribute in the EER diagram, such as an address or name attribute. The UDT logical notation in a Chebotko diagram is shown in Fig. 2, which shows an encoding type and an actor_name type.

Fig. 1. Chebotko diagram notation

Fig. 2. Example of UDT notation in Chebotko diagram

Lastly, they introduced their physical diagram notation, which mainly adds each column's respective data type, such as int or list<text>. The physical step also includes optional optimizations of certain tables according to the planned usage, including duplicating certain columns and using indexes.
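As an illustration of this conversion rule, consider a query that searches for employees by department with an equality predicate. A minimal CQL sketch, with the non-key column names being assumptions rather than the authors' actual schema, is:

-- The department key, used in the equality search, becomes the
-- partition key; the employee key becomes a clustering key that
-- keeps the rows within each partition unique and ordered.
CREATE TABLE employees_by_department (
    department_number int,
    employee_id int,
    employee_name text,
    PRIMARY KEY ((department_number), employee_id)
);

The equality search then touches a single partition, while any attribute used in an inequality search would have to appear as an additional clustering key so that it can be scanned in order.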
C. Poffo Methodology

The second methodology discussed in this research was introduced by [19] and is suitable for any columnar NoSQL database, not just Apache Cassandra. As explained before, this methodology depends only on its EER diagrams and does not need any planned application queries. Its authors introduced a unique logical notation for columnar databases, but no physical notation and no required optimizations, thereby implying that database developers know the best data type for each column and how to optimize it. Fig. 3 shows an example of the logical notation introduced by the Poffo methodology. Bold letters represent column families, hashtags represent primary keys, square brackets represent collections, tildes represent mandatory columns, caret signs represent artificial relations, which hold the values of the primary keys of other column families as plain regular columns, and 1:L notations indicate that the corresponding column family is a weak entity in the EER diagram.

Fig. 3. Example of logical notation in Poffo diagram

Referring to the EER diagram, the conversion rules of the Poffo methodology are as follows. Column families are created from strong and weak entities, from relationship sets that have a 'many' side (1:n and n:m), from relationship sets that have attributes, and from composite attributes. Columns are created from primary keys, regular attributes, the attributes of composite attributes, and the primary key attributes of entities connected to relationship sets. The primary key attributes of entities on the 'one' side of a relationship set become partition keys in the physical model, while the primary key attributes of entities on the 'many' side become clustering keys. Lastly, entities on the 'many' side of a relationship set must store the values of the primary key attributes of the entities on the 'one' side of that relationship set, which is illustrated by the caret sign in the logical notation.
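A minimal CQL sketch of these rules for a 1:n relationship such as DEPARTMENT (one side) and EMPLOYEE (many side) connected by WORKS_FOR follows; the attribute names are assumptions for illustration only:

-- The works_for relationship set has a 'many' side, so it becomes
-- its own column family: the 'one'-side key is the partition key,
-- the 'many'-side key the clustering key.
CREATE TABLE works_for (
    department_number int,
    employee_id int,
    PRIMARY KEY ((department_number), employee_id)
);

-- The entity on the 'many' side stores the 'one'-side primary key
-- as a regular column (the caret sign in the logical notation).
CREATE TABLE employee (
    employee_id int PRIMARY KEY,
    employee_name text,
    department_number int
);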
III. MODELLING THE STUDY CASE USING BOTH METHODOLOGIES

Fig. 4 shows the study case used in this research, adapted from [20]. This study case was chosen because it contains almost all of the EER diagram elements introduced above; for this research, an aggregation was also added. In short, an employee has a job, works for a department that controls several projects, works on several projects, is supervised by another employee (while that supervisor can also supervise several other employees), has at least one dependent, and is categorized into one of the following employee types: permanent, contract, or casual. A few employees also monitor the controlling activity done by a department, hence the aggregate entity. For modeling using the Chebotko methodology, the following planned queries were chosen arbitrarily, but we are confident that they are enough to represent real-world scenarios because they include several types of queries, including aggregation, inequality searches, and queries that ask for numerous pieces of information.

Q1: For a certain department, show its address alongside its employees and their jobs;

Q2: For supervisors, show their names, the names of the employees they supervise, and those employees' types;

Q3: Show the number of dependents of each employee;

Q4: For a project, show the information regarding the employees working on it, covering all types of employees. Also show each employee's work experience and phone numbers, and the project's finish date;

Q5: Show all projects, together with their supervising departments, whose total cost >= x and total duration >= y;

Q6: Show all employees that monitor the controlling activity of a department;

Q7: For an employee, show his/her dependents.
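As an example of how a planned query shapes the Chebotko model, Q1 is an equality search on a department and therefore maps to a single query-specific column family. A hedged CQL sketch, with the non-key column names assumed, is:

-- Q1: one partition per department holds its address, employees, and jobs
SELECT department_address, employee_name, job_name
FROM employees_by_department
WHERE department_number = 123;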

Fig. 4. Study case used for this research

A. Physical Notation of the Study Case Made Using the Chebotko Methodology

Fig. 5 shows the physical notation of the study case converted using the Chebotko methodology; its logical notation is not illustrated because it is similar to the physical notation. The only physical step performed is adding the respective data types, because tuning the databases would require specific knowledge. Also, in the employees_by_project table, several columns have to use the decimal data type because they contain null values. This is due to the different types of employees being listed in the same table, while one employee can only be of one type.

Fig. 5. Physical notation of the study case using Chebotko methodology

B. Logical and Physical Notation of the Study Case Made Using the Poffo Methodology

Fig. 6 and Fig. 7 show the physical and logical notation of the study case converted using the Poffo methodology; the physical model is illustrated using the Chebotko notation because the Poffo methodology did not specify a physical notation of its own. The columns and column families highlighted in red are interpreted from the original paper, where they are not exemplified, and from the authors' source code, whose URL was acquired by contacting the authors through email. Attributes that are part of a composite attribute are given their own columns, and there are no special column types such as static columns and counter columns, so under this methodology one has to use an aggregate function to count the number of occurrences of a particular column.

Fig. 6. Physical notation of the study case using Poffo methodology, illustrated using the Chebotko notation

Fig. 7. Logical notation of the study case using Poffo methodology
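For instance, without a counter column, counting each employee's dependents under the Poffo model falls back to a CQL aggregate over the dependents column family; the key column name here is an assumption:

-- Count the dependents of one employee at read time
SELECT COUNT(*) FROM dependents WHERE employee_id = 123;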
C. Testing Environment

This research was conducted on two devices, a laptop and a PC. The laptop is equipped with an Intel Core i5-1165G7 processor and 16 GB of RAM, while the PC is equipped with an Intel Core i5-10400 and 32 GB of RAM. Both devices run Ubuntu 20.04, Apache Cassandra 4.1.0, and cqlsh 6.1.0, with NoSQLBench 5.17.12 as the benchmarking tool; the 'consistency' parameter was set to ALL to ensure that each read request was sent to all nodes involved. Both devices ran in the same data center and on the same rack.
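In cqlsh, this setting corresponds to a per-session command issued before running the read queries:

-- Require acknowledgment from all replicas for subsequent requests
CONSISTENCY ALL;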
D. Testing Scenarios

To conduct performance testing for both methodologies, 40 YAML files (20 pairs) were created to run the tests, even though there were 22 tables. This is because the controls_by_employee and dependents_by_employee column families are subsets of the monitors and dependents column families respectively, so those two column families were not retested. The YAML files in each pair contain queries that read exactly the same set of columns, in order to measure the read performance of each methodology: the Chebotko YAML files access the columns and column families according to the physical model of the study case made using the Chebotko methodology, and vice versa. In this research, read time is the amount of time required for a YAML file that simulates the read requests of 900 thousand users to finish reading a set of columns on the corresponding physical model. Fig. 8 illustrates the testing scenarios. Both the Chebotko and Poffo YAML files were run on both devices five times each, for a total of ten runs. The laptop and PC have different IP addresses. The first through fifth trials were run on the laptop, and the rest on the PC. The tests simulate 900 thousand users because it was decided that the test data would contain 900 thousand employees; this is explained in the next section.

Fig. 8. Testing scenario in this research, involving ten trials for each YAML file
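The workload files themselves are not reproduced in the paper, but a rough sketch of the shape a NoSQLBench 5 workload of this kind might take is given below. The keyspace, binding functions, and op template here are assumptions to be checked against the NoSQLBench documentation, not the authors' actual files:

# Hypothetical NoSQLBench workload: 900,000 simulated read requests
scenarios:
  default: run driver=cql tags==block:reads cycles=900000
bindings:
  emp_id: Uniform(1,900000); ToInt();
blocks:
  reads:
    ops:
      read_employee: |
        SELECT employee_name FROM ks.employee WHERE employee_id = {emp_id};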

IV. EXPERIMENTAL EVALUATION

A. Number of Rows of Each Column Family

Table I shows the number of rows of each column family. Note that the numbers of rows differ because, considering real-world data, column families containing exactly the same number of rows are infrequent. In this case, we started by deciding that there would be 900 thousand employees, then categorized those employees into the three employee types, then decided that there would be only 9 thousand departments, and lastly that there would be only 10 thousand jobs in the dataset we were building.

TABLE I. COLUMN FAMILIES MADE AND USED IN THIS RESEARCH

Column Family              | Number of Rows | Methodology
works_for                  | 900,000        | Poffo
job                        | 10,000         | Poffo
supervise                  | 900,000        | Poffo
works_on                   | 900,000        | Poffo
controls                   | 9,000          | Poffo
employee_name              | 900,000        | Poffo
permanent_employee         | 400,000        | Poffo
contract_employee          | 300,000        | Poffo
casual_employee            | 200,000        | Poffo
dependents                 | 900,000        | Poffo
monitors                   | 9,000          | Poffo
department                 | 9,000          | Poffo
department_address         | 9,000          | Poffo
project                    | 900,000        | Poffo
employee                   | 900,000        | Poffo
employees_by_department    | 900,000        | Chebotko
employees_by_supervisor    | 900,000        | Chebotko
num_dependents_by_employee | 900,000        | Chebotko
employees_by_project       | 900,000        | Chebotko
projects_by_department     | 9,000          | Chebotko
controls_by_employee       | 9,000          | Chebotko
dependents_by_employee     | 900,000        | Chebotko

B. Example of the Queries Used

Fig. 9 and Fig. 10 show the queries used to access the columns of the employee column family for the physical models made using the Poffo methodology and the Chebotko methodology, respectively. In the physical model made using the Chebotko methodology, three queries against three different column families are needed to access all of those columns, but the results will show that this allows faster read performance, since the reads come from smaller column families.

Fig. 9. Select query used to access the employee column family in the physical model made using the Poffo methodology

Fig. 10. Select query used to access the employee column family in the physical model made using the Chebotko methodology
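The queries in those figures are not reproduced here, but the contrast they show can be sketched in CQL as follows, with all column names being assumptions:

-- Poffo model: a single query on the employee column family
SELECT employee_name FROM employee WHERE employee_id = 123;

-- Chebotko model: the same logical columns are spread over three
-- query-shaped column families, so three queries are needed
SELECT employee_name FROM employees_by_department WHERE department_number = 1;
SELECT employee_name FROM employees_by_supervisor WHERE supervisor_id = 45;
SELECT employee_name FROM employees_by_project WHERE project_number = 7;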


C. Overall Results

Fig. 11 shows the overall read performance results for both methodologies, obtained from the sums of the average read times of the tests run ten times for each set of read queries. Generally, queries ran more quickly on the PC than on the laptop. In terms of overall read time, the physical model made using the Poffo methodology allowed queries to retrieve data 2.24 times more quickly than the model made using the Chebotko methodology. Looking at each device individually, queries run on the physical model made using the Poffo methodology finished 2.28 times more quickly on the laptop and 2.20 times more quickly on the PC.

Fig. 11. Overall reading time, in seconds, on both physical models of the specified case study and on both devices

Fig. 12 and Fig. 13 show the average read times on the laptop, and Fig. 14 and Fig. 15 the average read times on the PC, of queries executed on the physical model made using the Poffo methodology. The physical model made using the Poffo methodology caused queries to retrieve data from the employee columns about 30% more slowly on the laptop and around 35% more slowly on the PC, but significantly more quickly from the job, department, department_address, and project columns, with advantages of around 91%, 25%, 94%, and 25% on the laptop and around 90%, 28%, 93%, and 26% on the PC, respectively. This is due to the different numbers of rows and columns that have to be read for each query. For the read performance of the job columns, this is because in the physical model made using the Poffo methodology the job column family could remain small, with only 10 thousand rows and 3 columns, whereas in the physical model made using the Chebotko methodology the job columns were planned to be accessed along with several other columns, giving the corresponding column family (employees_by_department) 6 columns and 900 thousand rows and significantly reducing its read performance.

The same explanation applies to the read time differences of every other set of columns.

Fig. 12. Average read time of columns according to the physical model made using the Poffo methodology on the laptop, without the job and department_address columns (in seconds)

Fig. 13. Average read time of columns according to the physical model made using the Poffo methodology on the laptop, for the job and department_address columns (in seconds)

Fig. 14. Average read time of columns according to the physical model made using the Poffo methodology on the PC, for the job and department_address columns (in seconds)

Fig. 15. Average read time of columns according to the physical model made using the Poffo methodology on the PC, without the job and department_address columns (in seconds)

Fig. 16 and Fig. 17 show the average read times of queries executed on the physical model made using the Chebotko methodology, on the laptop and the PC respectively, with the read results of the dependents_by_employee and controls_by_employee columns also included, because these two column families exist in this physical model as well. Surprisingly, only two column families in this physical model allow faster read times than their counterparts in the model made using the Poffo methodology. This is because the corresponding column families in the physical model made using the Poffo methodology are bigger, and even then the advantage is only slight. Also, using the counter data type does not necessarily improve read performance, as shown by the read times of the num_dependents_by_employee columns on both devices.
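For reference, a counter-based column family like the one in the Chebotko model can be declared as sketched below; the key column name is an assumption. In Cassandra, every non-key column of such a table must be a counter, and counters are maintained with UPDATE rather than INSERT:

CREATE TABLE num_dependents_by_employee (
    employee_id int PRIMARY KEY,
    num_dependents counter
);

-- Increment when a dependent is added
UPDATE num_dependents_by_employee
SET num_dependents = num_dependents + 1
WHERE employee_id = 123;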
Fig. 16. Average read time of columns according to the physical model made using the Chebotko methodology on the laptop (in seconds)

Fig. 17. Average read time of columns according to the physical model made using the Chebotko methodology on the PC (in seconds)

V. CONCLUSION

This work presents a comparative study of the read performance of two NoSQL columnar database methodologies that take two contrasting approaches to data modeling, evaluated on a chosen study case. The result of this research shows that a data modeling methodology which relies heavily on planned queries can, if not used carefully, impact performance negatively compared to one that depends purely on the ER/EER diagram. This means that if one uses the Chebotko methodology to model data, one also has to remember to design queries effectively, balancing the amount of information requested in a query against the corresponding impact inflicted on its read performance, whereas with the Poffo methodology one can simply model the data directly without having to worry about its performance later on.

Currently, this work has been applied only to Cassandra. It would be interesting if both methodologies, especially the Chebotko methodology, were also applied to other column-oriented databases such as HBase, to review their versatility compared to other data modeling methodologies. Also, subsequent data models should be optimized at the physical level, for example using bucketing and compression, to guarantee the best performance. The datasets used for both methodologies should be larger and, if possible, real datasets. The testing could be done in a live environment if possible, or should at least simulate a large number of users when using dummy datasets. Last but not least, future work related to this research could also test other aspects of databases, including create (insert), update, and delete operations.

REFERENCES

[1] A. Nayak, A. Poriya, and D. Poojary, "Type of NOSQL Databases and its Comparison with Relational Databases," International Journal of Applied Information Systems, vol. 5, no. 4, pp. 16-19, March 2013.
[2] K. Sahatqija, J. Ajdari, X. Zenuni, B. Raufi, and F. Ismaili, "Comparison between relational and NOSQL databases," 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 2018, pp. 0216-0221.
[3] P. J. Sadalage and M. Fowler, NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot Persistence. Upper Saddle River, NJ, USA: Addison-Wesley, 2013.
[4] K. Kaur and R. Rani, "Modeling and querying data in NoSQL databases," 2013 IEEE International Conference on Big Data, Silicon Valley, CA, USA, 2013, pp. 1-7.
[5] C. de Lima and R. dos Santos Mello, "A Workload-Driven Logical Design Approach for NoSQL Document Databases," in iiWAS '15: Proceedings of the 17th International Conference on Information Integration and Web-based Applications & Services, December 2015, Article No. 73, pp. 1-10.
[6] S. Banerjee and A. Sarkar, "Modeling NoSQL Databases: From Conceptual to Logical Level Design," in 3rd International Conference on Applications and Innovations in Mobile Computing (AIMoC 2016), February 2016.
[7] M. J. Mior, K. Salem, A. Aboulnaga, and R. Liu, "NoSE: Schema design for NoSQL applications," 2016 IEEE 32nd International Conference on Data Engineering (ICDE), Helsinki, Finland, 2016, pp. 181-192.
[8] S. Banerjee, S. Bhaskar, and A. Sarkar, "A Unified Conceptual Model for Data Warehouses," Annals of Emerging Technologies in Computing, vol. 5, pp. 162-169.
[9] A. Chebotko, A. Kashlev, and S. Lu, "A Big Data Modeling Methodology for Apache Cassandra," 2015 IEEE International Congress on Big Data, New York, NY, USA, 2015, pp. 238-245.
[10] J. P. Poffo and R. S. Mello, "A Logical Design Process for Columnar Databases," ICIW 2016: The Eleventh International Conference on Internet and Web Applications and Services, 2016, pp. 29-38.
[11] P. P. S. Chen, "The entity-relationship model: a basis for the enterprise view of data," in Proceedings of the June 13-16, 1977, National Computer Conference, 1977, pp. 77-84.
[12] J. M. Smith and D. C. Smith, "Database abstractions: Aggregation and generalization," ACM Transactions on Database Systems (TODS), vol. 2, no. 2, pp. 105-133, 1977.
[13] R. Elmasri, J. Weeldreyer, and A. Hevner, "The category concept: An extension to the entity-relationship model," Data & Knowledge Engineering, vol. 1, no. 1, pp. 75-116, 1985.
[14] S. B. Navathe, T. Sashidhar, and R. Elmasri, "Relationship merging in schema integration," in Proceedings of the 10th International Conference on Very Large Data Bases, August 1984, pp. 78-90.
[15] T. J. Teorey, D. Yang, and J. P. Fry, "A logical design methodology for relational databases using the extended entity-relationship model," ACM Computing Surveys (CSUR), vol. 18, no. 2, pp. 197-222, 1986.
[16] Y. Dongqing, T. J. Teorey, and J. P. Fry, "A practical approach to transforming extended ER diagrams into the relational model," Information Sciences, vol. 42, no. 2, pp. 167-186, 1987.
[17] A. Silberschatz, H. F. Korth, and S. Sudarshan, Database System Concepts, 7th ed., 2020.
[18] A. Chebotko, A. Kashlev, and S. Lu, "A Big Data Modeling Methodology for Apache Cassandra," 2015 IEEE International Congress on Big Data, New York, NY, USA, 2015, pp. 238-245.
[19] J. P. Poffo and R. S. Mello, "A Logical Design Process for Columnar Databases," in The Eleventh International Conference on Internet and Web Applications and Services (ICIW 2016), 2016.
[20] R. Elmasri and S. B. Navathe, Fundamentals of Database Systems, 7th ed. Pearson, 2015, p. 139.

