Database normalization aims to minimize redundancy in tables by separating entities so that each table describes a single entity. Normalization has many forms (stages): first, second, third and fourth normal forms.
Normalization is a process of organizing the data in a database to avoid data redundancy and insertion, update and deletion anomalies. Let's discuss the anomalies first, then the normal forms with examples.
Normalization is the process of splitting data into many tables in order to avoid data anomalies.
There are four common normal forms:
1) First Normal Form (1NF)
2) Second Normal Form (2NF)
3) Third Normal Form (3NF)
4) Boyce-Codd Normal Form (BCNF)
Database normalization is a technique of organizing the data in a database. Normalization is a systematic approach of decomposing tables to eliminate data redundancy and undesirable characteristics like insertion, update and deletion anomalies. It is a multi-step process that puts data into tabular form by removing duplicated data from the relation tables. Normalization is used mainly for two purposes: eliminating redundant data, and ensuring data dependencies make sense so that data is logically stored.
Without normalization, it becomes difficult to handle and update the database without facing data loss. Insertion, update and deletion anomalies are very frequent if the database is not normalized. To understand these anomalies, consider the example of a Student table sketched below.
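To make the three anomalies concrete, here is a minimal SQL sketch; the table layout and column names are illustrative assumptions, not from the answer above:

-- A hypothetical denormalized Student table: branch details are
-- repeated on every student row.
CREATE TABLE Student (
    StudentName VARCHAR(50),
    Age         INT,
    Branch      VARCHAR(50),
    BranchHead  VARCHAR(50)
);

-- Insertion anomaly: a new branch with no students yet cannot be
-- recorded without putting NULLs in the student columns.
-- Update anomaly: if a branch gets a new head, every row for that
-- branch must be updated; missing one row leaves the data inconsistent.
-- Deletion anomaly: deleting the last student of a branch also deletes
-- the branch and its head, information we may have wanted to keep.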
Normalization rules are divided into the following normal forms.
First Normal Form (1NF)
As per First Normal Form, no row of data may contain a repeating group of information; every column must hold a single, atomic value, and each row must be unique. Each table should be organized into rows, and each row should have a primary key that distinguishes it as unique.
The primary key is usually a single column, but sometimes more than one column can be combined to create a single primary key. For example, consider a table which is not in First Normal Form.
Student Table:

Student   Age   Subject
Adam      15    Biology, Maths
Alex      14    Maths
Stuart    17    Maths
In First Normal Form, no row may have a column in which more than one value is saved, such as values separated with commas. Instead, we must separate such data into multiple rows.
The Student Table following 1NF will be:

Student   Age   Subject
Adam      15    Biology
Adam      15    Maths
Alex      14    Maths
Stuart    17    Maths
With First Normal Form, data redundancy increases, as there will be many rows repeating the same values in some columns, but each row as a whole will be unique. The fix is sketched below.
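As a sketch, the 1NF fix above could be expressed in SQL like this (the column types and the composite primary key are assumptions):

-- Before 1NF: several subjects crammed into one column.
CREATE TABLE Student_Unnormalized (
    Student VARCHAR(50),
    Age     INT,
    Subject VARCHAR(100)  -- e.g. 'Biology, Maths' violates 1NF
);

-- After 1NF: one subject per row; a composite primary key keeps
-- each row unique.
CREATE TABLE Student_1NF (
    Student VARCHAR(50),
    Age     INT,
    Subject VARCHAR(50),
    PRIMARY KEY (Student, Subject)
);

INSERT INTO Student_1NF VALUES
    ('Adam', 15, 'Biology'),
    ('Adam', 15, 'Maths'),
    ('Alex', 14, 'Maths'),
    ('Stuart', 17, 'Maths');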
Second Normal Form (2NF)
As per the Second Normal Form, there must not be any partial dependency of any column on the primary key. For a table with a concatenated (composite) primary key, each column that is not part of the primary key must depend on the entire concatenated key for its existence. If any column depends on only one part of the concatenated key, the table fails Second Normal Form.
In the First Normal Form example there are two rows for Adam, to include the multiple subjects he has opted for. While this is searchable and follows First Normal Form, it is an inefficient use of space. Also, while the candidate key is {Student, Subject}, the Age of a student depends only on the Student column, which violates Second Normal Form. To achieve Second Normal Form, split the subjects out into an independent table and match them up using the student name as a foreign key.
The new Student Table following 2NF will be:

Student   Age
Adam      15
Alex      14
Stuart    17
In the Student Table the candidate key is the Student column, because the only other column, Age, is dependent on it.
The new Subject Table introduced for 2NF will be:

Student   Subject
Adam      Biology
Adam      Maths
Alex      Maths
Stuart    Maths
In the Subject Table the candidate key is the {Student, Subject} combination. Now both of the above tables qualify for Second Normal Form and will no longer suffer from the update anomalies described earlier. There are a few complex cases in which a table in Second Normal Form still suffers update anomalies; Third Normal Form exists to handle those scenarios. The decomposition is sketched below.
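A minimal SQL sketch of this 2NF decomposition (the table names and column types are assumptions):

-- Student table: Age depends only on Student, so it lives here.
CREATE TABLE Student_2NF (
    Student VARCHAR(50) PRIMARY KEY,
    Age     INT
);

-- Subject table: the full composite key determines each row;
-- Student is a foreign key back to the student table.
CREATE TABLE Subject_2NF (
    Student VARCHAR(50),
    Subject VARCHAR(50),
    PRIMARY KEY (Student, Subject),
    FOREIGN KEY (Student) REFERENCES Student_2NF (Student)
);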
Third Normal Form (3NF)
Third Normal Form requires that every non-prime attribute of a table be dependent on the primary key directly; there should not be a case where a non-prime attribute is determined by another non-prime attribute. Such a transitive functional dependency must be removed, and the table must also be in Second Normal Form. For example, consider a table with the following fields.
Student_Detail Table:

Student_id   Student_name   DOB   Street   City   State   Zip
In this table Student_id is the primary key, but Street, City and State depend on Zip. The dependency between Zip and the other fields is called a transitive dependency. Hence, to apply 3NF, we need to move Street, City and State to a new table, with Zip as its primary key.
The new Student_Detail Table:

Student_id   Student_name   DOB   Zip
Address Table:

Zip   Street   City   State
The advantages of removing the transitive dependency are that the amount of data duplication is reduced and data integrity is easier to maintain. The decomposition is sketched below.
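A sketch of the 3NF decomposition in SQL (column types are assumptions; the example keeps the answer's simplification that Zip determines Street, City and State):

-- Address table: Zip is the key; Street, City and State depend on it.
CREATE TABLE Address (
    Zip    VARCHAR(10) PRIMARY KEY,
    Street VARCHAR(100),
    City   VARCHAR(50),
    State  VARCHAR(50)
);

-- Student_Detail now references Address instead of repeating it.
CREATE TABLE Student_Detail (
    Student_id   INT PRIMARY KEY,
    Student_name VARCHAR(50),
    DOB          DATE,
    Zip          VARCHAR(10),
    FOREIGN KEY (Zip) REFERENCES Address (Zip)
);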
Boyce-Codd Normal Form (BCNF)
Boyce-Codd Normal Form is a stricter version of the Third Normal Form. This form deals with a certain type of anomaly that is not handled by 3NF. A 3NF table which does not have multiple overlapping candidate keys is already in BCNF. For a table to be in BCNF, the following conditions must be satisfied: the table must be in 3NF, and for every functional dependency X → Y, X must be a superkey.
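As an illustration (a standard textbook example, not from the answer above): suppose each professor teaches exactly one subject, so Professor → Subject holds even though Professor is not a superkey; the table below is then in 3NF but not in BCNF.

-- Candidate keys are {Student, Subject} and {Student, Professor}, so
-- every attribute is prime and 3NF holds; but Professor -> Subject
-- with Professor not a superkey violates BCNF.
CREATE TABLE Enrollment (
    Student   VARCHAR(50),
    Subject   VARCHAR(50),
    Professor VARCHAR(50),
    PRIMARY KEY (Student, Subject)
);

-- BCNF decomposition: give the Professor -> Subject dependency its
-- own table.
CREATE TABLE ProfessorSubject (
    Professor VARCHAR(50) PRIMARY KEY,
    Subject   VARCHAR(50)
);

CREATE TABLE StudentProfessor (
    Student   VARCHAR(50),
    Professor VARCHAR(50),
    PRIMARY KEY (Student, Professor)
);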
Database normalization is the process of organizing the columns (attributes) and tables (relations) of a relational database to reduce data redundancy and improve data integrity.
Normalization is the process of splitting a bigger table into many smaller tables without changing its functionality.
It is generally carried out during the design phase of the SDLC.
Advantages
1) It reduces redundancy (unnecessary repetition of data).
2) It avoids problems due to the deletion anomaly (inconsistency).
Normalization is a step-by-step process, and in each step we have to perform certain activities.
STEPS IN NORMALIZATION
1) 1NF – First Normal Form
2) 2NF – Second Normal Form
3) 3NF – Third Normal Form
The process of organizing the columns (attributes) and tables (relations) of a relational database to reduce data redundancy and improve data integrity ("data integrity" refers to the accuracy and consistency of data stored in a database, data warehouse, data mart or other construct).
It is a process of organizing data in a database. The goals of normalization are to eliminate redundant data and to ensure that data dependencies make sense. Both of these reduce the amount of space a database consumes and ensure that data is logically stored.
Database normalization is the process of organizing data in a database. This includes creating tables and establishing relationships between those tables according to rules designed both to protect the data and to make the database more flexible by eliminating redundancy and inconsistent dependency. There are three main reasons to normalize a database: the first is to minimize duplicate data, the second is to minimize or avoid data modification issues, and the third is to simplify queries.
We normalize to remove data duplication and redundancy, improving accuracy and speeding up the database. Normally we normalize data up to four levels of normal forms.
Database normalization is the process of organizing the columns (attributes) and tables (relations) of a relational database to reduce data redundancy and improve data integrity. Normalization is accomplished by applying some formal rules either by a process of synthesis or decomposition.
It is the process of restructuring a relational database to reduce data redundancy and improve integrity.