Top 10 SQL Server DBA Interview Questions By Deanna Dicken

Introduction

Last month I wrote an article on the questions I find most useful for interviewing a SQL Server developer. In this article, I'll cover the top 10 interview questions for SQL Server DBAs.

What purpose does the model database serve?

The model database, as its name implies, serves as the model (or template) for all databases created on the same instance. If the model database is modified, all subsequent databases created on that instance will pick up those changes, but earlier created databases will not. Note that TEMPDB is also created from model every time SQL Server starts up.

How do you trace the traffic hitting a SQL Server?

SQL Server Profiler is the SQL Server utility you can use to trace the traffic on the SQL Server instance. Traces can be filtered to narrow down the transactions that are captured, reducing the overhead incurred for the trace. The trace files can be searched, saved off, and even replayed to facilitate troubleshooting.

What types of replication are supported in SQL Server?

SQL Server has three types of replication: Snapshot, Merge, and Transactional.

Snapshot replication creates a snapshot of the data (a point-in-time picture of the data) to deliver to the subscribers. This is a good type to use when the data changes infrequently, when there is a small amount of data to replicate, or when large changes occur over a short period of time.

Merge replication uses a snapshot to seed the replication. Changes on both sides of the publication are tracked so the subscriber can synchronize with the publisher when connected. A typical use for this type of replication is in a client and server scenario. A server would act as a central repository and multiple clients would independently update their copies of the data until connected, at which time they would all send up their modifications to the central store.

Transactional replication also begins with a snapshot, only this time changes are tracked as transactions (as the name implies). Changes are replicated from publisher to subscriber as they occurred on the publisher, in the same order as they occurred, and in near real time. This type of replication is useful when the subscriber needs to know every change that occurred to the data (not just a point-in-time picture), when the change volume is high, and when the subscriber needs near real-time access to the changes.

Why would you use SQL Agent?

SQL Agent is the job scheduling mechanism in SQL Server. Jobs can be scheduled to run at a set time or when a specific event occurs. Jobs can also be executed on demand. SQL Agent is most often used to schedule administrative jobs such as backups.

What happens on checkpoint?

Checkpoints, whether scheduled or manually executed, write the dirty pages from the buffer cache to disk and, under the simple recovery model, allow the transaction log to be truncated up to the beginning of the oldest open transaction (the active portion of the log). Keeping committed transactions in the cache provides a performance gain for SQL Server. However, you do not want the transaction log to get too big, because it might consume too many resources and, should your database fail, take too long to process to recover the database. One important thing to note here is that SQL Server can only truncate up to the oldest open transaction. Therefore, if you are not seeing the expected relief from a checkpoint, it could very well be that someone forgot to commit or roll back their transaction. It is very important to finalize all transactions as soon as possible.
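As a quick illustration (the database name here is hypothetical), you can issue a checkpoint manually and then check log usage with DBCC SQLPERF:

USE SalesDB;               -- hypothetical database
CHECKPOINT;                -- write dirty pages from the buffer cache to disk
DBCC SQLPERF(LOGSPACE);    -- report log size and the percentage of log space in use for each database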

What is DBCC?

DBCC statements are Database Console Commands and come in four flavors: Maintenance, Informational, Validation, and Miscellaneous. Maintenance commands are those commands that allow the DBA to perform maintenance activities on the database such as shrinking a file. Informational commands provide feedback regarding the database such as providing information about the procedure cache. Validation commands include commands that validate the database such as the ever-popular CHECKDB. Finally, miscellaneous commands are those that obviously don't fit in the other three categories. This includes statements like DBCC HELP, which provides the syntax for a given DBCC command.
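For example, a validation command and the help command might be run like this (the database name is hypothetical):

DBCC CHECKDB ('SalesDB');    -- validate the allocation and structural integrity of the database
DBCC HELP ('CHECKDB');       -- display the syntax for the CHECKDB command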

How can you control the amount of free space in your index pages?

You can set the fill factor on your indexes. This tells SQL Server how much free space to leave in the index pages when re-indexing. The performance benefit here is fewer page splits (where SQL Server has to copy rows from one index page to another to make room for an inserted row) because there is room for growth built in to the index.
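As a small example (the index and table names are made up), an index could be rebuilt leaving 20 percent free space in each page:

ALTER INDEX IX_Customer_LastName ON dbo.Customer
REBUILD WITH (FILLFACTOR = 80);   -- leave 20% of each index page free for future inserts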

Why would you call Update Statistics?

Update Statistics is used to force a recalculation of query optimization statistics for a table or indexed view. Query optimization statistics are automatically recomputed, but in some cases, a query may benefit from updating those statistics more frequently. Beware though that re-computing the query statistics causes queries to be recompiled. This may or may not negate all performance gains you might have achieved by calling update statistics. In fact, it could have a negative impact on performance depending on the characteristics of the system.
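A minimal sketch, assuming a hypothetical table and statistics name:

UPDATE STATISTICS dbo.Customer;                                      -- refresh all statistics on the table
UPDATE STATISTICS dbo.Customer IX_Customer_LastName WITH FULLSCAN;   -- refresh one statistics object by scanning every row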

What is a correlated sub-query?

A correlated sub-query is a nested query that is linked to the outer query. For instance, say I wanted to find all the employees who have not entered their time for the week. I could query the Employee table to get their first and last name, but I need to look at the TimeEntry table to see if they've entered their time or not. I can't do a straight join here because I'm looking for the absence of time data, so I'll do a correlated sub-query similar to this:

SELECT FirstName, LastName
FROM EMPLOYEE e
WHERE NOT EXISTS (SELECT 1
                  FROM TimeEntry te
                  WHERE te.EmpID = e.EmpID
                    AND te.WeekID = 35)

Notice that the inner query relates to the outer query on the employee ID, thus making it a correlated sub-query. The inner query will be evaluated once per outer query row.

What authentication modes does SQL Server support?

SQL Server supports Windows Authentication, SQL Server Authentication, and mixed mode. Mixed mode allows you to use both Windows Authentication and SQL Server Authentication to log into your SQL Server. It's important to note that if the server is set to Windows Authentication mode only, you will not be able to log in as sa.

Conclusion

In this article, I list the top 10 DBA interview questions (as I see it, anyway). I would, however, encourage you to also ask the DBA the SQL Server Developer questions from my previous article. As I said in that article though, every workplace and project has different needs. I hope you found at least a few that you can use in yours.
http://www.databasejournal.com/features/mssql/article.php/3905461/article.htm

Database Fundamentals





The following are basic facts about databases.


Database Terms
  • Attribute
  • Cardinality
  • Data Dictionary
  • DBMS Engine
  • Design Tools
  • Attribute's Domain
  • Entity
  • Entity Class
  • Father of Relational Databases
  • Foreign Key
  • Hierarchy of Data Elements
  • Meta Data
  • Overhead Data
  • Primary Key
  • Relation
  • Relational Database
  • Runtime Subsystem
  • Schema
  • Transactions
  • User Data

ATTRIBUTE
An attribute is another word for field. In spreadsheet language it would be a cell. It is a place in a database table to store one piece of data of a given type. For example, an attribute designated to hold a last name could hold "Smith", but should not hold "Amy Smith".

CARDINALITY
Cardinality is a way to express the minimum and maximum number of instances allowed, as governed by the business rules. Minimum cardinality is the required number of instances an entity must have in a relationship in order for it to be valid. The minimum cardinality for a one-to-many relationship would be one. The minimum cardinality for a basketball team would be 5, or you would be forced to forfeit the game. Maximum cardinality is the maximum number of instances which can occur in a relationship in order for it to be valid. In a one-to-one relationship the maximum cardinality would also be one. For a baseball team, during the normal season, the maximum cardinality would be 25 active players on the roster.

THE DATA DICTIONARY
A database is self describing. By this we mean it documents itself through table structure outputs. One of the components of the data dictionary is the table data type layout.

You can easily see how the fields are defined: the data types, the lengths of the fields, and whether or not they can be null. This is just one example of the data dictionary information provided by a DBMS.
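For example, in a SQL-based DBMS you could query the standard INFORMATION_SCHEMA views to see this layout (the table name is hypothetical):

SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Employees';   -- list each field's data type, length, and nullability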

THE DBMS ENGINE
This is the component of the DBMS (Database Management System) that acts as the intermediary between the design tools and run-time subsystems on one side and the data on the other. The DBMS engine receives requests from the other two components, expressed in terms of columns and rows, and translates them into commands to the operating system to read and write data on disk.

THE DESIGN TOOLS
One of the three components of a DBMS. This subsystem provides the tools to assist users and programmers in creating and modifying components of the database, such as tables, queries, reports, and user forms. Many DBMS products also provide a programming environment for creating databases that perform very complex tasks.

Attribute's DOMAIN
The domain of an attribute is the range of data it can contain. This is not to say the attribute can contain the entire range at one time. An attribute's contents must be atomic, meaning they must be a single piece of information about the theme of the record. For example, an attribute named "JOB_TITLE" in the EMPLOYEES table could contain values from "Machine Operator", "Driver", "Foreman", "Shift Manager", all the way up to "President". It can only hold one of these per record at a time. An attribute designated for "JOB_TITLE" cannot hold any other type of data, such as Salary or Date_Of_Hire. Can you imagine having to look for the Date_Of_Hire somewhere in a table, but having no specified place? You might as well be searching text files again.

ENTITY
An entity is something that someone wants to track, such as an employee. It is basically the subject for a table. You gather data about the employee, you run queries to find out information about them, track their time, vacation, sick days, etc. Therefore, an entity is very much the same as a record in a table.

ENTITY CLASS
An entity class is a collection of entities, as defined by their structure. There are usually many entities in an entity class, all of the same structure and type. In my mind, an entity class is the table which contains the entities.

Father of Relational Databases
The father of relational databases was E. F. Codd, who worked for IBM at that time. He published a paper titled "A Relational Model of Data for Large Shared Data Banks" in June of 1970.

FOREIGN KEY
A foreign key is a field of the same data and type as the primary key in a corresponding table, to which it is linked. For example, in a transaction table, Customer_ID would be the foreign key field. It can be used to look up the matching Customer_ID in the Customer table, where Customer_ID is the primary key.
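A minimal SQL sketch of the customer/transaction example described above (table and column names are assumptions):

CREATE TABLE Customer (
    Customer_ID int PRIMARY KEY,   -- primary key in the customer table
    Name        varchar(100)
);

CREATE TABLE SalesTransaction (
    Transaction_ID int PRIMARY KEY,
    Customer_ID    int REFERENCES Customer (Customer_ID)   -- foreign key pointing back to the customer table
);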

The Hierarchy of Data Elements in:
  • A file processing system

    1. Bits->Bytes or Characters->Fields->Records->Files
  • A Database system

    1. Bits->Bytes or Characters->Fields->Records->Files + Metadata + Indexes + Application Metadata

META DATA
Metadata is data about data. The data dictionary described above is an example of metadata. It is the self-describing part of a database. Information such as table names, user names, data types, and field sizes are all metadata, describing the database.

OVERHEAD DATA
Overhead data is that which the system uses for itself. Indexes for example are overhead data. This is because the system uses indexes to speed searches, and to aid in joins. The overhead part comes in that this data also consumes processing time, and resources. Each time you update a table, the index must also be updated, which takes a bit of processing time but it also speeds up the search capability. You have to decide if the price in resources is worth the benefit of speed in creating and maintaining an index.

PRIMARY KEY
The primary key is the field, or combination of fields, that uniquely identifies each row in a table. The primary key is usually indexed; in some systems that is required. The primary key is normally the field, or combined fields, on which joins are linked. All data within each row or record should be dependent on the entirety of the primary key. Primary keys are used to normalize data tables.

A RELATION
This is a table which, as one of its attributes, has a unique identifier for each of its records, also known as a primary key. In most cases, the primary key is indexed to enhance performance of the system by speeding the lookup capabilities of the DBMS.

In a products table, for example, the Product_ID would be unique for each item. The entire table, with its unique field, is called a relation.


A RELATIONAL DATABASE
A relational database entails some form of data relationship. It gets its name from the relation of tables to other tables within the database. A relational database is set up so that a key is present in two or more tables. In one table it will be the primary key; in the other table it will be the foreign key. Where the primary key matches the foreign key is where the relationship occurs. You may have a one-to-one relationship, where only one of each key member can be present in each table. You may also have a one-to-many relationship, where only one member can exist in one table but many occurrences can be present in the other table. And finally, you can have a many-to-many relationship, where many occurrences can be present in both tables. For example, in a one-to-many relationship there is only one occurrence of the Product_ID in the products table, but many occurrences can exist in the transaction table.

THE RUNTIME SUBSYSTEM
This subsystem processes the application components that are developed using the design tools. For example, Access has a runtime component that links data to forms and reports. This is just part of the DBMS; the user or the developer need not be concerned with how it works. When a given form is opened, the runtime subsystem opens the required tables, extracts the data, and displays it to the user. There is also a component that facilitates the read and write requests for the applications.

SCHEMA
The SCHEMA is the design of the database, and why it was created. The schema is the layout of the tables, attribute types and sizes, which fields are indexed, the relationships, domains, and business rules concerning a database. It is the design from which the database as well as its application programs were built. In a nutshell, the schema encompasses everything about the database.



TRANSACTIONS
Transactions are a group of SQL statements which work together to perform an entire function, for example a sales transaction. You need the following:
  1. A statement to add a record to the transaction table.
  2. A statement to update the Inventory table.
  3. A statement to update the Customer table, if necessary.
  4. A statement to commit the data.
One of two things MUST happen. All of these statements must work together to accomplish their goal, or none of the statements work. That is the key to transaction processing, all or nothing. Log files are kept by the system to record what has been accomplished so, in the event something goes wrong, we know where to start. This is a way of maintaining the integrity of our data.
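A rough T-SQL sketch of this all-or-nothing idea (the table and column names are hypothetical):

BEGIN TRANSACTION;
BEGIN TRY
    INSERT INTO SalesTransaction (CustomerID, ProductID, Quantity) VALUES (42, 7, 1);   -- add a record to the transaction table
    UPDATE Inventory SET QuantityOnHand = QuantityOnHand - 1 WHERE ProductID = 7;       -- update the Inventory table
    UPDATE Customer  SET LastOrderDate = GETDATE() WHERE CustomerID = 42;               -- update the Customer table, if necessary
    COMMIT TRANSACTION;       -- all statements succeeded together
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;     -- something failed, so none of the changes are kept
END CATCH;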

USER DATA
User data is just as the name implies. User data is the data which the user enters into the database tables.

Introduction to Relational Database Management System

Ten Common Database Design Mistakes

Louis Davidson
No list of mistakes is ever going to be exhaustive. People (myself included) do a lot of really stupid things, at times, in the name of "getting it done." This list simply reflects the database design mistakes that are currently on my mind, or in some cases, constantly on my mind. I have done this topic two times before. If you're interested in hearing the podcast version, visit Greg Low's super-excellent SQL Down Under. I also presented a boiled down, ten-minute version at PASS for the Simple-Talk booth. Originally there were ten, then six, and today back to ten. And these aren't exactly the same ten that I started with; these are ten that stand out to me as of today.
Before I start with the list, let me be honest for a minute. I used to have a preacher who made sure to tell us before some sermons that he was preaching to himself as much as he was to the congregation. When I speak, or when I write an article, I have to listen to that tiny little voice in my head that helps filter out my own bad habits, to make sure that I am teaching only the best practices. Hopefully, after reading this article, the little voice in your head will talk to you when you start to stray from what is right in terms of database design practices.
So, the list:
  1. Poor design/planning
  2. Ignoring normalization
  3. Poor naming standards
  4. Lack of documentation
  5. One table to hold all domain values
  6. Using identity/guid columns as your only key
  7. Not using SQL facilities to protect data integrity
  8. Not using stored procedures to access data
  9. Trying to build generic objects
  10. Lack of testing

Poor design/planning

"If you don't know where you are going, any road will take you there" – George Harrison
Prophetic words for all parts of life and a description of the type of issues that plague many projects these days.
Let me ask you: would you hire a contractor to build a house and then demand that they start pouring a foundation the very next day? Even worse, would you demand that it be done without blueprints or house plans? Hopefully, you answered "no" to both of these. A design is needed to make sure that the house you want gets built, and that the land you are building it on will not sink into some underground cavern. If you answered yes, I am not sure if anything I can say will help you.
Like a house, a good database is built with forethought, and with proper care and attention given to the needs of the data that will inhabit it; it cannot be tossed together in some sort of reverse implosion.
Since the database is the cornerstone of pretty much every business project, if you don't take the time to map out the needs of the project and how the database is going to meet them, then the chances are that the whole project will veer off course and lose direction. Furthermore, if you don't take the time at the start to get the database design right, then you'll find that any substantial changes in the database structures that you need to make further down the line could have a huge impact on the whole project, and greatly increase the likelihood of the project timeline slipping.
Far too often, a proper planning phase is ignored in favor of just "getting it done". The project heads off in a certain direction and when problems inevitably arise – due to the lack of proper designing and planning – there is "no time" to go back and fix them properly, using proper techniques. That's when the "hacking" starts, with the veiled promise to go back and fix things later, something that happens very rarely indeed.
Admittedly it is impossible to predict every need that your design will have to fulfill and every issue that is likely to arise, but it is important to mitigate against potential problems as much as possible, by careful planning.

Ignoring Normalization

Normalization defines a set of methods to break down tables to their constituent parts until each table represents one and only one "thing", and its columns serve to fully describe only the one "thing" that the table represents.
The concept of normalization has been around for 30 years and is the basis on which SQL and relational databases are implemented. In other words, SQL was created to work with normalized data structures. Normalization is not just some plot by database programmers to annoy application programmers (that is merely a satisfying side effect!)
SQL is very additive in nature in that, if you have bits and pieces of data, it is easy to build up a set of values or results. In the FROM clause, you take a set of data (a table) and add (JOIN) it to another table. You can add as many sets of data together as you like, to produce the final set you need.
This additive nature is extremely important, not only for ease of development, but also for performance. Indexes are most effective when they can work with the entire key value. Whenever you have to use SUBSTRING, CHARINDEX, LIKE, and so on, to parse out a value that is combined with other values in a single column (for example, to split the last name of a person out of a full name column) the SQL paradigm starts to break down and data becomes less and less searchable.
So normalizing your data is essential to good performance, and ease of development, but the question always comes up: "How normalized is normalized enough?" If you have read any books about normalization, then you will have heard many times that 3rd Normal Form is essential, but 4th and 5th Normal Forms are really useful and, once you get a handle on them, quite easy to follow and well worth the time required to implement them.
In reality, however, it is quite common that not even the first Normal Form is implemented correctly.
Whenever I see a table with repeating column names appended with numbers, I cringe in horror. And I cringe in horror quite often. Consider the following example Customer table, with a set of repeating payment columns (Payment1, Payment2, and so on up to Payment12):

Are there always 12 payments? Is the order of payments significant? Does a NULL value for a payment mean UNKNOWN (not filled in yet), or a missed payment? And when was the payment made?!?
A payment does not describe a Customer and should not be stored in the Customer table. Details of payments should be stored in a Payment table, in which you could also record extra information about the payment, like when the payment was made, and what the payment was for:

In this second design, each column stores a single unit of information about a single "thing" (a payment), and each row represents a specific instance of a payment.
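Since the original diagrams are not reproduced here, a rough sketch of the normalized design might look like this (names and data types are assumptions):

CREATE TABLE Customer (
    CustomerId int PRIMARY KEY,
    Name       varchar(100) NOT NULL
);

CREATE TABLE Payment (
    PaymentId   int PRIMARY KEY,
    CustomerId  int NOT NULL REFERENCES Customer (CustomerId),   -- which customer made the payment
    Amount      decimal(10,2) NOT NULL,
    PaymentDate datetime NOT NULL,                                -- when the payment was made
    Purpose     varchar(100) NULL                                 -- what the payment was for
);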
This second design is going to require a bit more code early in the process, but it is far more likely that you will be able to figure out what is going on in the system without having to hunt down the original programmer and kick their butt…sorry…figure out what they were thinking.

Poor naming standards

"That which we call a rose, by any other name would smell as sweet"
This quote from Romeo and Juliet by William Shakespeare sounds nice, and it is true from one angle. If everyone agreed that, from now on, a rose was going to be called dung, then we could get over it and it would smell just as sweet. The problem is that if, when building a database for a florist, the designer calls it dung and the client calls it a rose, then you are going to have some meetings that sound far more like an Abbott and Costello routine than a serious conversation about storing information about horticulture products.
Names, while a personal choice, are the first and most important line of documentation for your application. I will not get into all of the details of how best to name things here; it is a large and messy topic. What I want to stress in this article is the need for consistency. The names you choose are not just to enable you to identify the purpose of an object, but to allow all future programmers, users, and so on to quickly and easily understand how a component part of your database was intended to be used, and what data it stores. No future user of your design should need to wade through a 500-page document to determine the meaning of some wacky name.
Consider, for example, a column named, X304_DSCR. What the heck does that mean? You might decide, after some head scratching, that it means "X304 description". Possibly it does, but maybe DSCR means discriminator, or discretizator?
Unless you have established DSCR as a corporate standard abbreviation for description, then X304_DESCRIPTION is a much better name, and one that leaves nothing to the imagination.
That just leaves you to figure out what the X304 part of the name means. On first inspection, to me, X304 sounds more like it should be data in a column rather than a column name. If I subsequently found that, in the organization, there was also an X305 and X306 then I would flag that as an issue with the database design. For maximum flexibility, data is stored in columns, not in column names.
Along these same lines, resist the temptation to include "metadata" in an object's name. A name such as tblCustomer or colVarcharAddress might seem useful from a development perspective, but to the end user it is just confusing. As a developer, you should rely on being able to determine that a table name is a table name by context in the code or tool, and present to the users clear, simple, descriptive names, such as Customer and Address.
A practice I strongly advise against is the use of spaces and quoted identifiers in object names. You should avoid column names such as "Part Number" or, in Microsoft style, [Part Number], which require your users to include these spaces and identifiers in their code. It is annoying and simply unnecessary.
Acceptable alternatives would be part_number, partNumber or PartNumber. Again, consistency is key. If you choose PartNumber then that's fine – as long as the column containing invoice numbers is called InvoiceNumber, and not one of the other possible variations.

Lack of documentation

I hinted in the intro that, in some cases, I am writing for myself as much as you. This is the topic where that is most true. By carefully naming your objects, columns, and so on, you can make it clear to anyone what it is that your database is modeling. However, this is only step one in the documentation battle. The unfortunate reality is, though, that "step one" is all too often the only step.
Not only will a well-designed data model adhere to a solid naming standard, it will also contain definitions on its tables, columns, relationships, and even default and check constraints, so that it is clear to everyone how they are intended to be used. In many cases, you may want to include sample values, where the need arose for the object, and anything else that you may want to know in a year or two when "future you" has to go back and make changes to the code.
NOTE:
Where this documentation is stored is largely a matter of corporate standards and/or convenience to the developer and end users. It could be stored in the database itself, using extended properties. Alternatively, it might be maintained in the data modeling tools. It could even be in a separate data store, such as Excel or another relational database. My company maintains a metadata repository database, which we developed in order to present this data to end users in a searchable, linkable format. Format and usability are important, but the primary battle is to have the information available and up to date.
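For example, if you keep the definitions in the database itself, extended properties can be added with sp_addextendedproperty (the object names here are hypothetical):

EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'One row per payment received from a customer.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Payment';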
Your goal should be to provide enough information that when you turn the database over to a support programmer, they can figure out your minor bugs and fix them (yes, we all make bugs in our code!). I know there is an old joke that poorly documented code is a synonym for "job security." While there is a hint of truth to this, it is also a way to be hated by your coworkers and never get a raise. And no good programmer I know of wants to go back and rework their own code years later. It is best if the bugs in the code can be managed by a junior support programmer while you create the next new thing. Job security along with raises is achieved by being the go-to person for new challenges.

One table to hold all domain values

"One Ring to rule them all and in the darkness bind them"
This is all well and good for fantasy lore, but it's not so good when applied to database design, in the form of a "ruling" domain table. Relational databases are based on the fundamental idea that every object represents one and only one thing. There should never be any doubt as to what a piece of data refers to. By tracing through the relationships, from column name, to table name, to primary key, it should be easy to examine the relationships and know exactly what a piece of data means.
The big myth perpetrated by architects who don't really understand relational database architecture (me included early in my career) is that the more tables there are, the more complex the design will be. So, conversely, shouldn't condensing multiple tables into a single "catch-all" table simplify the design? It does sound like a good idea, but at one time giving Pauly Shore the lead in a movie sounded like a good idea too.
For example, consider the following model snippet where I needed domain values for:
  • Customer CreditStatus
  • Customer Type
  • Invoice Status
  • Invoice Line Item BackOrder Status
  • Invoice Line Item Ship Via Carrier
On the face of it that would be five domain tables…but why not just use one generic domain table, like this?

This may seem a very clean and natural way to design a table for all of your domain values, but the problem is that it is just not very natural to work with in SQL. Say we just want the domain values for the Customer table:
 
SELECT *
FROM Customer
  JOIN GenericDomain as CustomerType
    ON Customer.CustomerTypeId = CustomerType.GenericDomainId
      and CustomerType.RelatedToTable = 'Customer'
      and CustomerType.RelatedToColumn = 'CustomerTypeId'
  JOIN GenericDomain as CreditStatus
    ON Customer.CreditStatusId = CreditStatus.GenericDomainId
      and CreditStatus.RelatedToTable = 'Customer'
      and CreditStatus.RelatedToColumn = 'CreditStatusId'
As you can see, this is far from being a natural join. It comes down to the problem of mixing apples with oranges. At first glance, domain tables are just an abstract concept of a container that holds text. And from an implementation centric standpoint, this is quite true, but it is not the correct way to build a database. In a database, the process of normalization, as a means of breaking down and isolating data, takes every table to the point where one row represents one thing. And each domain of values is a distinctly different thing from all of the other domains (unless it is not, in which case the one table will suffice.). So what you do, in essence, is normalize the data on each usage, spreading the work out over time, rather than doing the task once and getting it over with.
So instead of the single table for all domains, you might model it as:
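Since the original diagram is not reproduced here, a minimal sketch of the separate domain tables might look like this (table names come from the surrounding text; the columns are assumptions):

CREATE TABLE CustomerType (
    CustomerTypeId int PRIMARY KEY,
    Name           varchar(50) NOT NULL,
    Description    varchar(200) NULL
);

CREATE TABLE CreditStatus (
    CreditStatusId int PRIMARY KEY,
    Name           varchar(50) NOT NULL,
    Description    varchar(200) NULL
);

ALTER TABLE Customer
    ADD CONSTRAINT FK_Customer_CustomerType
        FOREIGN KEY (CustomerTypeId) REFERENCES CustomerType (CustomerTypeId);   -- validated naturally by a foreign key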

Looks harder to do, right? Well, it is initially. Frankly it took me longer to flesh out the example tables. But, there are quite a few tremendous gains to be had:
  • Using the data in a query is much easier:
SELECT *
FROM Customer
  JOIN CustomerType
    ON Customer.CustomerTypeId = CustomerType.CustomerTypeId
  JOIN CreditStatus
    ON  Customer.CreditStatusId = CreditStatus.CreditStatusId 
  • Data can be validated using foreign key constraints very naturally, something not feasible for the other solution unless you implement ranges of keys for every table – a terrible mess to maintain.
  • If it turns out that you need to keep more information about a ShipViaCarrier than just the code, 'UPS', and description, 'United Parcel Service', then it is as simple as adding a column or two. You could even expand the table to be a full blown representation of the businesses that are carriers for the item.
  • All of the smaller domain tables will fit on a single page of disk. This ensures a single read (and likely a single page in cache). In the other case, you might have your domain table spread across many pages, unless you cluster on the referring table name, which then could cause it to be more costly to use a non-clustered index if you have many values.
  • You can still have one editor for all rows, as most domain tables will likely have the same base structure/usage. And while you would lose the ability to query all domain values in one query easily, why would you want to? (A union query over the tables could easily be created if needed, but this would seem an unlikely need.)
I should probably rebut the thought that might be in your mind. "What if I need to add a new column to all domain tables?" For example, you forgot that the customer wants to be able to do custom sorting on domain values and didn't put anything in the tables to allow this. This is a fair question, especially if you have 1000 of these tables in a very large database. First, this rarely happens, and when it does it is going to be a major change to your database either way.
Second, even if this became a task that was required, SQL has a complete set of commands that you can use to add columns to tables, and using the system tables it is a pretty straightforward task to build a script to add the same column to hundreds of tables all at once. The change will not be as easy, but it will not be so much more difficult as to outweigh the large benefits of separate tables.
The point of this tip is simply that it is better to do the work upfront, making structures solid and maintainable, rather than trying to attempt to do the least amount of work to start out a project. By keeping tables down to representing one "thing" it means that most changes will only affect one table, after which it follows that there will be less rework for you down the road.

Using identity/guid columns as your only key

First Normal Form dictates that all rows in a table must be uniquely identifiable. Hence, every table should have a primary key. SQL Server allows you to define a numeric column as an IDENTITY column, and then automatically generates a unique value for each row. Alternatively, you can use NEWID() (or NEWSEQUENTIALID()) to generate a random, 16 byte unique value for each row. These types of values, when used as keys, are what are known as surrogate keys. The word surrogate means "something that substitutes for" and in this case, a surrogate key should be the stand-in for a natural key.
The problem is that too many designers use a surrogate key column as the only key column on a given table. The surrogate key values have no actual meaning in the real world; they are just there to uniquely identify each row.
Now, consider the following Part table, whereby PartID is an IDENTITY column and is the primary key for the table:
 
PartID   PartNumber   Description
1        XXXXXXXX     The X part
2        XXXXXXXX     The X part
3        YYYYYYYY     The Y part
How many rows are there in this table? Well, there seem to be three, but are rows with PartIDs 1 and 2 actually the same row, duplicated? Or are they two different rows that should be unique but were keyed in incorrectly?
The rule of thumb I use is simple. If a human being could not pick which row they want from a table without knowledge of the surrogate key, then you need to reconsider your design. This is why there should be a key of some sort on the table to guarantee uniqueness, in this case likely on PartNumber.
In summary: as a rule, each of your tables should have a natural key that means something to the user, and can uniquely identify each row in your table. In the very rare event that you cannot find a natural key (perhaps, for example, a table that provides a log of events), then use an artificial/surrogate key.
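As a hedged sketch of the idea, the Part table could carry both the surrogate key and a natural key enforced by a uniqueness constraint:

CREATE TABLE Part (
    PartID      int IDENTITY(1,1) PRIMARY KEY,           -- surrogate key
    PartNumber  varchar(20) NOT NULL,
    Description varchar(100) NULL,
    CONSTRAINT AK_Part_PartNumber UNIQUE (PartNumber)    -- natural key guarantees real-world uniqueness
);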

Not using SQL facilities to protect data integrity

All fundamental, non-changing business rules should be implemented by the relational engine. The base rules of nullability, string length, assignment of foreign keys, and so on, should all be defined in the database.
There are many different ways to import data into SQL Server. If your base rules are defined in the database itself, you can guarantee that they will never be bypassed, and you can write your queries without ever having to worry whether the data you're viewing adheres to the base business rules.
Rules that are optional, on the other hand, are wonderful candidates to go into a business layer of the application. For example, consider a rule such as this: "For the first part of the month, no part can be sold at more than a 20% discount, without a manager's approval".
Taken as a whole, this rule smacks of being rather messy, not very well controlled, and subject to frequent change. For example, what happens when next week the maximum discount is 30%? Or when the definition of "first part of the month" changes from 15 days to 20 days? Most likely you won't want to go through the difficulty of implementing these complex temporal business rules in SQL Server code – the business layer is a great place to implement rules like this.
However, consider the rule a little more closely. There are elements of it that will probably never change. E.g.
  • The maximum discount it is ever possible to offer
  • The fact that the approver must be a manager
These aspects of the business rule very much ought to get enforced by the database and design. Even if the substance of the rule is implemented in the business layer, you are still going to have a table in the database that records the size of the discount, the date it was offered, the ID of the person who approved it, and so on. On the Discount column, you should have a CHECK constraint that restricts the values allowed in this column to between 0.00 and 0.90 (or whatever the maximum is). Not only will this implement your "maximum discount" rule, but it will also guard against a user entering a 200% or a negative discount by mistake. On the ManagerID column, you should place a foreign key constraint, which references the Managers table and ensures that the ID entered is that of a real manager (or, alternatively, a trigger that selects only EmployeeIds corresponding to managers).
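A rough sketch of those two constraints (the table and column names are assumptions based on the description above):

ALTER TABLE DiscountApproval
    ADD CONSTRAINT CK_DiscountApproval_Discount
        CHECK (Discount BETWEEN 0.00 AND 0.90);   -- never allow more than the maximum possible discount

ALTER TABLE DiscountApproval
    ADD CONSTRAINT FK_DiscountApproval_Manager
        FOREIGN KEY (ManagerID) REFERENCES Managers (ManagerID);   -- the approver must be a real manager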
Now, at the very least we can be sure that the data meets the very basic rules that the data must follow, so we never have to code something like this in order to check that the data is good:
 SELECT CASE WHEN discount < 0 THEN 0 WHEN discount > 1 THEN 1…
We can feel safe that data meets the basic criteria, every time.

Not using stored procedures to access data

Stored procedures are your friend. Use them whenever possible as a method to insulate the database layer from the users of the data. Do they take a bit more effort? Sure, initially, but what good thing doesn't take a bit more time? Stored procedures make database development much cleaner, and encourage collaborative development between your database and functional programmers. A few of the other interesting reasons that stored procedures are important include the following.

Maintainability

Stored procedures provide a known interface to the data, and to me, this is probably the largest draw. When code that accesses the database is compiled into a different layer, performance tweaks cannot be made without a functional programmer's involvement. Stored procedures give the database professional the power to change characteristics of the database code without additional resource involvement, making small changes, or large upgrades (for example changes to SQL syntax) easier to do.

Encapsulation

Stored procedures allow you to "encapsulate" any structural changes that you need to make to the database so that the knock on effect on user interfaces is minimized. For example, say you originally modeled one phone number, but now want an unlimited number of phone numbers. You could leave the single phone number in the procedure call, but store it in a different table as a stopgap measure, or even permanently if you have a "primary" number of some sort that you always want to display. Then a stored proc could be built to handle the other phone numbers. In this manner the impact to the user interfaces could be quite small, while the code of stored procedures might change greatly.

Security

Stored procedures can provide specific and granular access to the system. For example, you may have 10 stored procedures that all update table X in some way. If a user needs to be able to update a particular column in a table and you want to make sure they never update any others, then you can simply grant to that user the permission to execute just the one procedure out of the ten that allows them to perform the required update.
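For instance (the procedure and user names are hypothetical), the permission can be scoped to that single procedure:

GRANT EXECUTE ON OBJECT::dbo.Customer_UpdateCreditLimit TO SalesClerkUser;   -- the user can run this one procedure but cannot touch the table directly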

Performance

There are a couple of reasons that I believe stored procedures enhance performance. First, if a newbie writes ratty code (like using a cursor to go row by row through an entire ten million row table to find one value, instead of using a WHERE clause), the procedure can be rewritten without impact to the system (other than giving back valuable resources). The second reason is plan reuse. Unless you are using dynamic SQL calls in your procedure, SQL Server can store a plan and not need to compile it every time it is executed. It's true that in every version of SQL Server since 7.0 this has become less and less significant, as SQL Server gets better at storing plans for ad hoc SQL calls (see note below). However, stored procedures still make it easier for plan reuse and performance tweaks. In the case where ad hoc SQL would actually be faster, this can be coded into the stored procedure seamlessly.
In SQL Server 2005, there is a database setting (PARAMETERIZATION FORCED) that, when enabled, will cause all queries to have their plans saved. This does not cover more complicated situations that procedures would cover, but can be a big help. There is also a feature known as plan guides, which allow you to override the plan for a known query type. Both of these features are there to help out when stored procedures are not used, but stored procedures do the job with no tricks.
And this list could go on and on. There are drawbacks too, because nothing is ever perfect. It can take longer to code stored procedures than it does to just use ad hoc calls. However, the amount of time to design your interface and implement it is well worth it, when all is said and done.

Trying to code generic T-SQL objects

I touched on this subject earlier in the discussion of generic domain tables, but the problem is more prevalent than that. Every new T-SQL programmer, when they first start coding stored procedures, starts to think "I wish I could just pass a table name as a parameter to a procedure." It does sound quite attractive: one generic stored procedure that can perform its operations on any table you choose. However, this should be avoided as it can be very detrimental to performance and will actually make life more difficult in the long run.
T-SQL objects do not do "generic" easily, largely because lots of design considerations in SQL Server have clearly been made to facilitate reuse of plans, not code. SQL Server works best when you minimize the unknowns so it can produce the best plan possible. The more it has to generalize the plan, the less it can optimize that plan.
Note that I am not specifically talking about dynamic SQL procedures. Dynamic SQL is a great tool to use when you have procedures that are not optimizable / manageable otherwise. A good example is a search procedure with many different choices. A precompiled solution with multiple OR conditions might have to take a worst case scenario approach to the plan and yield weak results, especially if parameter usage is sporadic.
However, the main point of this tip is that you should avoid coding very generic objects, such as ones that take a table name and twenty column name/value pairs as parameters and let you update the values in the table. For example, you could write a procedure that started out:
 
CREATE PROCEDURE updateAnyTable
  @tableName sysname,
  @columnName1 sysname,
  @columnName1Value varchar(max),
  @columnName2 sysname,
  @columnName2Value varchar(max)
  …
The idea would be to dynamically specify the name of a column and the value to pass to a SQL statement. This solution is no better than simply using ad hoc calls with an UPDATE statement. Instead, when building stored procedures, you should build specific, dedicated stored procedures for each task performed on a table (or multiple tables.) This gives you several benefits:
  • Properly compiled stored procedures can have a single compiled plan attached to them and reused.
  • Properly compiled stored procedures are more secure than ad-hoc SQL or even dynamic SQL procedures, reducing the surface area for an injection attack greatly because the only parameters to queries are search arguments or output values.
  • Testing and maintenance of compiled stored procedures is far easier to do, since you generally only have to verify the search arguments, rather than check that tables/columns/etc. exist and handle the case where they do not.
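To make the contrast concrete, a dedicated procedure for a single task might look like this minimal sketch (table and column names are hypothetical):

CREATE PROCEDURE dbo.Customer_UpdateCreditStatus
    @CustomerId     int,
    @CreditStatusId int
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.Customer
    SET    CreditStatusId = @CreditStatusId
    WHERE  CustomerId = @CustomerId;   -- one table, one task, one reusable plan
END;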
A nice technique is to build a code generation tool in your favorite programming language (even T-SQL) using SQL metadata to build very specific stored procedures for every table in your system. Generate all of the boring, straightforward objects, including all of the tedious code to perform error handling that is so essential, but painful to write more than once or twice.
In my Apress book, Pro SQL Server 2005 Database Design and Optimization, I provide several such "templates" (mainly for triggers, but also stored procedures) that have all of the error handling built in. I would suggest you consider building your own (possibly based on mine) to use when you need to manually build a trigger/procedure or whatever.

Lack of testing

When the dial in your car says that your engine is overheating, what is the first thing you blame? The engine. Why don't you immediately assume that the dial is broken? Or something else minor? Two reasons:
  • The engine is the most important component of the car and it is common to blame the most important part of the system first.
  • It is all too often true.
As database professionals know, the first thing to get blamed when a business system is running slow is the database. Why? First because it is the central piece of most any business system, and second because it also is all too often true.
We can play our part in dispelling this notion, by gaining deep knowledge of the system we have created and understanding its limits through testing.
But let's face it; testing is the first thing to go in a project plan when time slips a bit. And what suffers the most from the lack of testing? Functionality? Maybe a little, but users will notice and complain if the "Save" button doesn't actually work and they cannot save changes to a row they spent 10 minutes editing. What really gets the shaft in this whole process is deep system testing to make sure that the design you (presumably) worked so hard on at the beginning of the project is actually implemented correctly.
But, you say, the users accepted the system as working, so isn't that good enough? The problem with this statement is that what user acceptance "testing" usually amounts to is the users poking around, trying out the functionality that they understand and giving you the thumbs up if their little bit of the system works. Is this reasonable testing? Not in any other industry would this be vaguely acceptable. Do you want your automobile tested like this? "Well, we drove it slowly around the block once, one sunny afternoon with no problems; it is good!" When that car subsequently "failed" on the first drive along a freeway, or during the first drive through rain or snow, then the driver would have every right to be very upset.
Too many database systems get tested like that car, with just a bit of poking around to see if individual queries and modules work. The first real test is in production, when users attempt to do real work. This is especially true when it is implemented for a single client (even worse when it is a corporate project, with management pushing for completion more than quality).
Initially, major bugs come in thick and fast, especially performance related ones. If the first time you have tried a full production set of users, background processes, workflow processes, system maintenance routines, ETL, etc., is on your system launch day, you are extremely likely to discover that you have not anticipated all of the locking issues that might be caused by users creating data while others are reading it, or hardware issues caused by poorly set up hardware. It can take weeks to live down the cries of "SQL Server can't handle it" even after you have done the proper tuning.
Once the major bugs are squashed, the fringe cases (which are pretty rare cases, like a user entering a negative amount for hours worked) start to raise their ugly heads. What you end up with at this point is software that irregularly fails in what seem like weird places (since large quantities of fringe bugs will show up in ways that aren't very obvious and are really hard to find.)
Now, it is far harder to diagnose and correct because now you have to deal with the fact that users are working with live data and trying to get work done. Plus you probably have a manager or two sitting on your back saying things like "when will it be done?" every 30 seconds, even though it can take days and weeks to discover the kinds of bugs that result in minor (yet important) data aberrations. Had proper testing been done, it would never have taken weeks of testing to find these bugs, because a proper test plan takes into consideration all possible types of failures, codes them into an automated test, and tries them over and over. Good testing won't find all of the bugs, but it will get you to the point where most of the issues that correspond to the original design are ironed out.
If everyone insisted on a strict testing plan as an integral and immutable part of the database development process, then maybe someday the database won't be the first thing to be fingered when there is a system slowdown.

Summary

Database design and implementation is the cornerstone of any data centric project (read 99.9% of business applications) and should be treated as such when you are developing. This article, while probably a bit preachy, is as much a reminder to me as it is to anyone else who reads it. Some of the tips, like planning properly, using proper normalization, using strong naming standards, and documenting your work: these are things that even the best DBAs and data architects have to fight to make happen. In the heat of battle, when your manager's manager's manager is being berated for things taking too long to get started, it is not easy to push back and remind them that they pay you now, or they pay you later. These tasks pay dividends that are very difficult to quantify, because to quantify success you must fail first. And even when you succeed in one area, all too often other minor failures crop up in other parts of the project so that some of your successes don't even get noticed.
The tips covered here are ones that I have picked up over the years that have turned me from being mediocre to a good data architect/database programmer. None of them take extraordinary amounts of time (except perhaps design and planning) but they all take more time upfront than doing it the "easy way". Let's face it, if the easy way were that easy in the long run, I for one would abandon the harder way in a second. It is not until you see the end result that you realize that success comes from starting off right as much as finishing right.

Fundamentals of Relational Database Design

Overview

Database design theory is a topic that many people avoid learning for lack of time. Many others attempt to learn it, but give up because of the dry, academic treatment it is usually given by most authors and teachers. But if creating databases is part of your job, then you're treading on thin ice if you don't have a good solid understanding of relational database design theory. This article begins with an introduction to relational database design theory, including a discussion of keys, relationships, integrity rules, and the often-dreaded "Normal Forms." Following the theory, I present a practical step-by-step approach to good database design.

The Relational Model

The relational database model was conceived by E. F. Codd in 1969, then a researcher at IBM. The model is based on branches of mathematics called set theory and predicate logic. The basic idea behind the relational model is that a database consists of a series of unordered tables (or relations) that can be manipulated using non-procedural operations that return tables. This model was in vast contrast to the more traditional database theories of the time that were much more complicated, less flexible and dependent on the physical storage methods of the data.

Note: It is commonly thought that the word relational in the relational model comes from the fact that you relate together tables in a relational database. Although this is a convenient way to think of the term, it's not accurate. Instead, the word relational has its roots in the terminology that Codd used to define the relational model. The table in Codd's writings was actually referred to as a relation (a related set of information). In fact, Codd (and other relational database theorists) use the terms relations, attributes and tuples where most of us use the more common terms tables, columns and rows, respectively (or the more physical, and thus less preferable for discussions of database design theory, files, fields and records).

The relational model can be applied to both databases and database management systems (DBMS) themselves. The relational fidelity of database programs can be compared using Codd's 12 rules (since Codd's seminal paper on the relational model, the number of rules has been expanded to 300) for determining how DBMS products conform to the relational model. When compared with other database management programs, Microsoft Access fares quite well in terms of relational fidelity. Still, it has a long way to go before it meets all twelve rules completely. Fortunately, you don't have to wait until Microsoft Access is perfect in a relational sense before you can benefit from the relational model. The relational model can also be applied to the design of databases, which is the subject of the remainder of this article.

Relational Database Design

When designing a database, you have to make decisions regarding how best to take some system in the real world and model it in a database. This consists of deciding which tables to create, what columns they will contain, as well as the relationships between the tables. While it would be nice if this process was totally intuitive and obvious, or even better automated, this is simply not the case. A well-designed database takes time and effort to conceive, build and refine. The benefits of a database that has been designed according to the relational model are numerous. Some of them are:
  • Data entry, updates and deletions will be efficient.
  • Data retrieval, summarization and reporting will also be efficient.
  • Since the database follows a well-formulated model, it behaves predictably.
  • Since much of the information is stored in the database rather than in the application, the database is somewhat self-documenting.
  • Changes to the database schema are easy to make.
The goal of this article is to explain the basic principles behind relational database design and demonstrate how to apply these principles when designing a database using Microsoft Access. This article is by no means comprehensive and certainly not definitive. Many books have been written on database design theory; in fact, many careers have been devoted to its study. Instead, this article is meant as an informal introduction to database design theory for the database developer. Note: While the examples in this article are centered around Microsoft Access databases, the discussion also applies to database development using the Microsoft Visual Basic® programming system, the Microsoft FoxPro® database management system, and the Microsoft SQL Server™ client-server database management system.

Tables, Uniqueness and Keys

Tables in the relational model are used to represent "things" in the real world. Each table should represent only one thing. These things (or entities) can be real-world objects or events. For example, a real-world object might be a customer, an inventory item, or an invoice. Examples of events include patient visits, orders, and telephone calls.

Tables are made up of rows and columns. The relational model dictates that each row in a table be unique. If you allow duplicate rows in a table, then there's no way to uniquely address a given row via programming. This creates all sorts of ambiguities and problems that are best avoided. You guarantee uniqueness for a table by designating a primary key: a column that contains unique values for a table. Each table can have only one primary key, even though several columns or combination of columns may contain unique values. All columns (or combination of columns) in a table with unique values are referred to as candidate keys, from which the primary key must be drawn. All other candidate key columns are referred to as alternate keys.

Keys can be simple or composite. A simple key is a key made up of one column, whereas a composite key is made up of two or more columns. The decision as to which candidate key is the primary one rests in your hands; there's no absolute rule as to which candidate key is best. Fabian Pascal, in his book SQL and Relational Basics, notes that the decision should be based upon the principles of minimality (choose the fewest columns necessary), stability (choose a key that seldom changes), and simplicity/familiarity (choose a key that is both simple and familiar to users).

Let's illustrate with an example. Say that a company has a table of customers called tblCustomer, which looks like the table shown in Figure 1.
Figure 1. The best choice for primary key for tblCustomer would be CustomerId.
Candidate keys for tblCustomer might include CustomerId, (LastName + FirstName), Phone#, (Address + City + State), and (Address + ZipCode). Following Pascal's guidelines, you would rule out the last three candidates because addresses and phone numbers can change fairly frequently. The choice between CustomerId and the composite name key is less obvious and involves tradeoffs. How likely is it that a customer's name will change (marriages, for example, cause names to change)? Will misspellings of names be common? How likely is it that two customers will have the same first and last names? How familiar will CustomerId be to users? There's no right answer, but most developers favor numeric primary keys because names do sometimes change and because searches and sorts of numeric columns are more efficient than searches and sorts of text columns in Microsoft Access (and most other databases). Counter columns in Microsoft Access make good primary keys, especially when you're having trouble coming up with good candidate keys and no arbitrary identification number is already in place. Don't use a counter column if you'll sometimes need to renumber the values (you won't be able to) or if you require an alphanumeric code (Microsoft Access supports only long integer counter values). Also, counter columns only make sense for tables on the one side of a one-to-many relationship (see the discussion of relationships in the next section). Note: In many situations, it is best to use some sort of arbitrary static whole number (e.g., employee ID, order ID, a counter column) as a primary key rather than a descriptive text column. This avoids the problem of misspellings and name changes. Also, don't use real (floating-point) numbers as primary keys, since their values are inexact.
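To make the key discussion concrete, here is a minimal sketch in SQL of how tblCustomer might be declared with CustomerId as its primary key. The column names follow Figure 1, but the data types, the counter (IDENTITY) column, and the renaming of Phone# to Phone are illustrative assumptions; in Microsoft Access you would normally define the equivalent in the table designer, and the DDL below uses SQL Server-style syntax.

    -- Illustrative sketch only: types and the counter column are assumptions,
    -- and the exact syntax varies by product (SQL Server-style DDL shown).
    CREATE TABLE tblCustomer (
        CustomerId INT IDENTITY(1,1) NOT NULL,   -- counter/AutoNumber surrogate key
        LastName   VARCHAR(50)  NOT NULL,
        FirstName  VARCHAR(50)  NOT NULL,
        Address    VARCHAR(100) NULL,
        City       VARCHAR(50)  NULL,
        State      CHAR(2)      NULL,
        ZipCode    VARCHAR(10)  NULL,
        Phone      VARCHAR(20)  NULL,             -- "Phone#" in Figure 1
        CONSTRAINT PK_tblCustomer PRIMARY KEY (CustomerId)
    );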

Foreign Keys and Domains

Although primary keys are a function of individual tables, if you created databases that consisted of only independent and unrelated tables, you'd have little need for them. Primary keys become essential, however, when you start to create relationships that join together multiple tables in a database. A foreign key is a column in a table used to reference a primary key in another table. Continuing the example presented in the last section, let's say that you choose CustomerId as the primary key for tblCustomer. Now define a second table, tblOrder, as shown in Figure 2.
Figure 2. CustomerId is a foreign key in tblOrder which can be used to reference a customer stored in the tblCustomer table.
CustomerId is considered a foreign key in tblOrder since it can be used to refer to a given customer (i.e., a row in the tblCustomer table). It is important that a foreign key and the primary key it references share a common meaning and draw their values from the same domain. Domains are simply pools of values from which columns are drawn. For example, CustomerId is of the domain of valid customer ID numbers, which in this case might be long integers ranging between 1 and 50,000. Similarly, a column named Sex might be based on a one-letter domain containing only 'M' or 'F'. Domains can be thought of as user-defined column types whose definition implies certain rules that the columns must follow and certain operations that you can perform on those columns. Microsoft Access supports domains only partially. For example, Microsoft Access will not let you create a relationship between two tables using columns that do not share the same datatype (e.g., text, number, date/time). On the other hand, Microsoft Access will not prevent you from joining the Integer column EmployeeAge from one table to the Integer column YearsWorked from a second table, even though these two columns are obviously from different domains.
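As a sketch of how the foreign key in Figure 2 might be declared in SQL (the column list and types are assumptions, and in Microsoft Access you would instead draw the join in the Relationships window), note that CustomerId in tblOrder has the same datatype, and the same intended domain, as CustomerId in tblCustomer:

    -- Illustrative only: tblCustomer is assumed to exist as sketched earlier.
    CREATE TABLE tblOrder (
        OrderId    INT IDENTITY(1,1) NOT NULL,
        CustomerId INT NOT NULL,                  -- same domain as tblCustomer.CustomerId
        OrderDate  DATETIME NOT NULL,
        CONSTRAINT PK_tblOrder PRIMARY KEY (OrderId),
        CONSTRAINT FK_tblOrder_tblCustomer
            FOREIGN KEY (CustomerId) REFERENCES tblCustomer (CustomerId)
    );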

Relationships

You define foreign keys in a database to model relationships in the real world. Relationships between real-world entities can be quite complex, involving numerous entities each having multiple relationships with each other. For example, a family has multiple relationships between multiple people—all at the same time. In a relational database such as Microsoft Access, however, you consider only relationships between pairs of tables. These tables can be related in one of three different ways: one-to-one, one-to-many or many-to-many.

One-to-One Relationships

Two tables are related in a one-to-one (1—1) relationship if, for every row in the first table, there is at most one row in the second table. True one-to-one relationships seldom occur in the real world. This type of relationship is often created to get around some limitation of the database management software rather than to model a real-world situation. In Microsoft Access, one-to-one relationships may be necessary in a database when you have to split a table into two or more tables because of security or performance concerns or because of the limit of 255 columns per table. For example, you might keep most patient information in tblPatient, but put especially sensitive information (e.g., patient name, social security number and address) in tblConfidential (see Figure 3). Access to the information in tblConfidential could be more restricted than for tblPatient. As a second example, perhaps you need to transfer only a portion of a large table to some other application on a regular basis. You can split the table into the transferred and the non-transferred pieces, and join them in a one-to-one relationship.
Figure 3. The tables tblPatient and tblConfidential are related in a one-to-one relationship. The primary key of both tables is PatientId.
Tables that are related in a one-to-one relationship should always have the same primary key, which will serve as the join column.
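One way to express such a one-to-one split in SQL is to give both tables the same primary key and let the confidential table's key double as a foreign key back to the patient table. The columns below are assumptions based on the example; the shared PatientId join column is the point:

    -- Illustrative sketch: both tables share PatientId, the join column.
    CREATE TABLE tblPatient (
        PatientId INT NOT NULL PRIMARY KEY,
        AdmitDate DATETIME NULL
        -- ...other non-sensitive columns
    );

    CREATE TABLE tblConfidential (
        PatientId INT NOT NULL PRIMARY KEY
            REFERENCES tblPatient (PatientId),    -- 1-1: the PK doubles as an FK
        LastName  VARCHAR(50)  NULL,
        SSN       CHAR(11)     NULL,
        Address   VARCHAR(100) NULL
    );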

One-to-Many Relationships

Two tables are related in a one-to-many (1—M) relationship if for every row in the first table, there can be zero, one, or many rows in the second table, but for every row in the second table there is exactly one row in the first table. For example, each order for a pizza delivery business can have multiple items. Therefore, tblOrder is related to tblOrderDetails in a one-to-many relationship (see Figure 4). The one-to-many relationship is also referred to as a parent-child or master-detail relationship. One-to-many relationships are the most commonly modeled relationship.
Figure 4. There can be many detail lines for each order in the pizza delivery business, so tblOrder and tblOrderDetails are related in a one-to-many relationship.
One-to-many relationships are also used to link base tables to information stored in lookup tables. For example, tblPatient might have a short one-letter DischargeDiagnosis code, which can be linked to a lookup table, tlkpDiagCode, to get more complete Diagnosis descriptions (stored in DiagnosisName). In this case, tlkpDiagCode is related to tblPatient in a one-to-many relationship (i.e., one row in the lookup table can be used in zero or more rows in the patient table).

Many-to-Many Relationships

Two tables are related in a many-to-many (M—M) relationship when for every row in the first table, there can be many rows in the second table, and for every row in the second table, there can be many rows in the first table. Many-to-many relationships can't be directly modeled in relational database programs, including Microsoft Access. These types of relationships must be broken into multiple one-to-many relationships. For example, a patient may be covered by multiple insurance plans and a given insurance company covers multiple patients. Thus, the tblPatient table in a medical database would be related to the tblInsurer table in a many-to-many relationship. In order to model the relationship between these two tables, you would create a third, linking table, perhaps called tblPtInsurancePgm that would contain a row for each insurance program under which a patient was covered (see Figure 5). Then, the many-to-many relationship between tblPatient and tblInsurer could be broken into two one-to-many relationships (tblPatient would be related to tblPtInsurancePgm and tblInsurer would be related to tblPtInsurancePgm in one-to-many relationships).
Figure 5. A linking table, tblPtInsurancePgm, is used to model the many-to-many relationship between tblPatient and tblInsurer.
In Microsoft Access, you specify relationships using the Edit | Relationships command. In addition, you can create ad hoc relationships at any point using queries.
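A sketch of the linking table in SQL follows; the essential features are the two foreign keys and the composite primary key, while the exact columns (including the optional PolicyNo attribute) are assumptions:

    -- Illustrative only: tblPatient and tblInsurer are assumed to exist,
    -- each with a single-column integer primary key.
    CREATE TABLE tblPtInsurancePgm (
        PatientId INT NOT NULL REFERENCES tblPatient (PatientId),
        InsurerId INT NOT NULL REFERENCES tblInsurer (InsurerId),
        PolicyNo  VARCHAR(30) NULL,               -- hypothetical extra attribute
        CONSTRAINT PK_tblPtInsurancePgm PRIMARY KEY (PatientId, InsurerId)
    );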

Normalization

As mentioned earlier in this article, when designing databases you are faced with a series of choices. How many tables will there be, and what will they represent? Which columns will go in which tables? What will the relationships between the tables be? The answer to each of these questions lies in something called normalization. Normalization is the process of simplifying the design of a database so that it achieves the optimum structure. Normalization theory gives us the concept of normal forms to assist in achieving the optimum structure. The normal forms are a linear progression of rules that you apply to your database, with each higher normal form achieving a better, more efficient design. The normal forms are:
  • First Normal Form
  • Second Normal Form
  • Third Normal Form
  • Boyce Codd Normal Form
  • Fourth Normal Form
  • Fifth Normal Form
In this article I will discuss normalization through Third Normal Form.

Before First Normal Form: Relations

The Normal Forms are based on relations rather than tables. A relation is a special type of table that has the following attributes:
  1. They describe one entity.
  2. They have no duplicate rows; hence there is always a primary key.
  3. The columns are unordered.
  4. The rows are unordered.
Microsoft Access doesn't require you to define a primary key for each and every table, but it strongly recommends it. Needless to say, the relational model makes this an absolute requirement. In addition, tables in Microsoft Access generally meet attributes 3 and 4. That is, with a few exceptions, the manipulation of tables in Microsoft Access doesn't depend upon a specific ordering of columns or rows. (One notable exception is when you specify the data source for a combo or list box.) For all practical purposes the terms table and relation are interchangeable, and I will use the term table in the remainder of this article. It's important to note, however, that when I use the term table, I actually mean a table that also meets the definition of a relation.

First Normal Form

First Normal Form (1NF) says that all column values must be atomic.

The word atom comes from the Greek atomos, meaning indivisible (or literally "not to cut"). 1NF dictates that, for every row-by-column position in a given table, there exists only one value, not an array or list of values. The benefits from this rule should be fairly obvious. If lists of values are stored in a single column, there is no simple way to manipulate those values. Retrieval of data becomes much more laborious and difficult to generalize. For example, the table in Figure 6, tblOrder1, used to store order records for a hardware store, would violate 1NF:
Figure 6. tblOrder1 violates First Normal Form because the data stored in the Items column is not atomic.
You'd have a difficult time retrieving information from this table, because too much information is being stored in the Items field. Think how difficult it would be to create a report that summarized purchases by item. 1NF also prohibits the presence of repeating groups, even if they are stored in composite (multiple) columns. For example, the same table might be improved upon by replacing the single Items column with six columns: Quant1, Item1, Quant2, Item2, Quant3, Item3 (see Figure 7).
Figure 7. A better, but still flawed, version of the Orders table, tblOrder2. The repeating groups of information violate First Normal Form.
While this design has divided the information into multiple fields, it's still problematic. For example, how would you go about determining the quantity of hammers ordered by all customers during a particular month? Any query would have to search all three Item columns to determine if a hammer was purchased and then sum over the three quantity columns. Even worse, what if a customer ordered more than three items in a single order? You could always add additional columns, but where would you stop? Ten items, twenty items? Say that you decided that a customer would never order more than twenty-five items in any one order and designed the table accordingly. That means you would be using 50 columns to store the item and quantity information per record, even for orders that only involved one or two items. Clearly this is a waste of space. And someday, someone would want to order more than 25 items. Tables in 1NF do not have the problems of tables containing repeating groups. The table in Figure 8, tblOrder3, is 1NF since each column contains one value and there are no repeating groups of columns. In order to attain 1NF, I have added a column, OrderItem#. The primary key of this table is a composite key made up of OrderId and OrderItem#.
Figure 8. The tblOrder3 table is in First Normal Form.
You could now easily construct a query to calculate the number of hammers ordered; the Totals query in Figure 9 is an example.
Figure 9. Since tblOrder3 is in First Normal Form, you can easily construct a Totals query to determine the total number of hammers ordered by customers.
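In SQL, the Totals query of Figure 9 might look something like the sketch below; the Item and Quantity column names and the 'Hammer' literal are assumptions, and in Microsoft Access you would normally build the same thing in the query designer with Group By and Sum:

    -- How many hammers have been ordered, across all orders?
    SELECT Item, SUM(Quantity) AS TotalQuantity
    FROM tblOrder3
    WHERE Item = 'Hammer'
    GROUP BY Item;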

Second Normal Form

A table is said to be in Second Normal Form (2NF) if it is in 1NF and every non-key column is fully dependent on the (entire) primary key.

Put another way, tables should only store data relating to one "thing" (or entity) and that entity should be described by its primary key. The table shown in Figure 10, tblOrder4, is a slightly modified version of tblOrder3. Like tblOrder3, tblOrder4 is in First Normal Form. Each column is atomic, and there are no repeating groups.
Figure 10. The tblOrder4 table is in First Normal Form. Its primary key is a composite of OrderId and OrderItem#.
To determine if tblOrder4 meets 2NF, you must first note its primary key. The primary key is a composite of OrderId and OrderItem#. Thus, in order to be 2NF, each non-key column (i.e., every column other than OrderId and OrderItem#) must be fully dependent on the primary key. In other words, does the value of OrderId and OrderItem# for a given record imply the value of every other column in the table? The answer is no. Given the OrderId, you know the customer and date of the order, without having to know the OrderItem#. Thus, these two columns are not dependent on the entire primary key which is composed of both OrderId and OrderItem#. For this reason tblOrder4 is not 2NF. You can achieve Second Normal Form by breaking tblOrder4 into two tables. The process of breaking a non-normalized table into its normalized parts is called decomposition. Since tblOrder4 has a composite primary key, the decomposition process is straightforward. Simply put everything that applies to each order in one table and everything that applies to each order item in a second table. The two decomposed tables, tblOrder and tblOrderDetail, are shown in Figure 11.
Figure 11. The tblOrder and tblOrderDetail tables satisfy Second Normal Form. OrderId is a foreign key in tblOrderDetail that you can use to rejoin the tables.
Two points are worth noting here.
  • When normalizing, you don't throw away information. In fact, this form of decomposition is termed non-loss decomposition because no information is sacrificed to the normalization process.
  • You decompose the tables in such a way as to allow them to be put back together again using queries. Thus, it's important to make sure that tblOrderDetail contains a foreign key to tblOrder. The foreign key in this case is OrderId which appears in both tables.
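As a sketch of the decomposition (column names and types are assumptions based on Figures 10 and 11), the two tables and the query that reassembles them might look like this:

    -- Order-level facts: fully dependent on OrderId alone.
    CREATE TABLE tblOrder (
        OrderId    INT NOT NULL PRIMARY KEY,
        CustomerId INT NOT NULL,
        OrderDate  DATETIME NOT NULL
    );

    -- Item-level facts: dependent on the entire (OrderId, OrderItem#) key.
    CREATE TABLE tblOrderDetail (
        OrderId            INT NOT NULL REFERENCES tblOrder (OrderId),
        OrderItemNo        INT NOT NULL,          -- "OrderItem#" in the figures
        ProductId          INT NOT NULL,
        ProductDescription VARCHAR(50) NULL,
        Quantity           INT NOT NULL,
        CONSTRAINT PK_tblOrderDetail PRIMARY KEY (OrderId, OrderItemNo)
    );

    -- Non-loss decomposition: a join puts the original rows back together.
    SELECT o.OrderId, o.CustomerId, o.OrderDate,
           d.OrderItemNo, d.ProductId, d.ProductDescription, d.Quantity
    FROM tblOrder AS o
    INNER JOIN tblOrderDetail AS d ON d.OrderId = o.OrderId;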

Third Normal Form

A table is said to be in Third Normal Form (3NF) if it is in 2NF and if all non-key columns are mutually independent.

An obvious example of a dependency is a calculated column. For example, if a table contains the columns Quantity and PerItemCost, you could opt to calculate and store in that same table a TotalCost column (which would be equal to Quantity*PerItemCost), but this table wouldn't be 3NF. It's better to leave this column out of the table and make the calculation in a query or on a form or a report instead. This saves room in the database and avoids having to update TotalCost every time Quantity or PerItemCost changes. Dependencies that aren't the result of calculations can also exist in a table. The tblOrderDetail table from Figure 11, for example, is in 2NF because all of its non-key columns (Quantity, ProductId and ProductDescription) are fully dependent on the primary key. That is, given an OrderId and an OrderItem#, you know the values of Quantity, ProductId and ProductDescription. Unfortunately, tblOrderDetail also contains a dependency between two of its non-key columns, ProductId and ProductDescription. Dependencies cause problems when you add, update, or delete records. For example, say you need to add 100 detail records, each of which involves the purchase of screwdrivers. This means you would have to input a ProductId code of 2 and a ProductDescription of "screwdriver" for each of these 100 records. Clearly this is redundant. Similarly, if you decide to change the description of the item to "No. 2 Phillips-head screwdriver" at some later time, you will have to update all 100 records. Another problem arises when you wish to delete all of the 1994 screwdriver purchase records at the end of the year. Once all of the records are deleted, you will no longer know what a ProductId of 2 means, since you've deleted from the database both the history of purchases and the fact that ProductId 2 means "No. 2 Phillips-head screwdriver." You can remedy each of these anomalies by further normalizing the database to achieve Third Normal Form. Note: An anomaly is simply an error or inconsistency in the database. A poorly designed database runs the risk of introducing numerous anomalies. There are three types of anomalies:
  • Insertion: an anomaly that occurs during the insertion of a record. For example, the insertion of a new row causes a calculated total field stored in another table to report the wrong total.
  • Deletion: an anomaly that occurs during the deletion of a record. For example, the deletion of a row in the database deletes more information than you wished to delete.
  • Update: an anomaly that occurs during the updating of a record. For example, updating a description column for a single part in an inventory database requires you to make a change to thousands of rows.
The tblOrderDetail table can be further decomposed to achieve 3NF by breaking out the ProductId—ProductDescription dependency into a lookup table as shown in Figure 12. This gives you a new order detail table, tblOrderDetail1 and a lookup table, tblProduct. When decomposing tblOrderDetail, take care to put a copy of the linking column, in this case ProductId, in both tables. ProductId becomes the primary key of the new table, tblProduct, and becomes a foreign key column in tblOrderDetail1. This allows you to easily join together the two tables using a query.
Figure 12. The tblOrderDetail1 and tblProduct tables are in Third Normal Form. The ProductId column in tblOrderDetail1 is a foreign key referencing tblProduct.
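A sketch of the Third Normal Form version follows (names and types are again assumptions): the description moves into the tblProduct lookup table, tblOrderDetail1 keeps only the ProductId foreign key, and a join recovers the description whenever it's needed.

    CREATE TABLE tblProduct (
        ProductId          INT NOT NULL PRIMARY KEY,
        ProductDescription VARCHAR(50) NOT NULL   -- stored once, updated in one place
    );

    CREATE TABLE tblOrderDetail1 (
        OrderId     INT NOT NULL,
        OrderItemNo INT NOT NULL,                  -- "OrderItem#" in the figures
        ProductId   INT NOT NULL REFERENCES tblProduct (ProductId),
        Quantity    INT NOT NULL,
        CONSTRAINT PK_tblOrderDetail1 PRIMARY KEY (OrderId, OrderItemNo)
    );

    -- Rejoining the lookup table when the description is needed:
    SELECT d.OrderId, d.OrderItemNo, d.Quantity, p.ProductDescription
    FROM tblOrderDetail1 AS d
    INNER JOIN tblProduct AS p ON p.ProductId = d.ProductId;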

Higher Normal Forms

After Codd defined the original set of normal forms, it was discovered that Third Normal Form, as originally defined, had certain inadequacies. This led to several higher normal forms, including the Boyce/Codd, Fourth, and Fifth Normal Forms. I will not be covering these higher normal forms; instead, several points are worth noting here:
  • Every higher normal form is a superset of all lower forms. Thus, if your design is in Third Normal Form, by definition it is also in 1NF and 2NF.
  • If you've normalized your database to 3NF, you've likely also achieved Boyce/Codd Normal Form (and maybe even 4NF or 5NF).
  • To quote C.J. Date, the principles of database design are "nothing more than formalized common sense."
  • Database design is more art than science.
This last item needs to be emphasized. While it's relatively easy to work through the examples in this article, the process gets more difficult when you are presented with a business problem (or another scenario) that needs to be computerized (or downsized). I have outlined an approach to take later in this article, but first the subject of integrity rules will be discussed.

Integrity Rules

The relational model defines several integrity rules that, while not part of the definition of the Normal Forms, are nonetheless a necessary part of any relational database. There are two types of integrity rules: general and database-specific.

General Integrity Rules

The relational model specifies two general integrity rules. They are referred to as general rules because they apply to all databases. They are: entity integrity and referential integrity. The entity integrity rule is very simple. It says that primary keys cannot contain null (missing) data. The reason for this rule should be obvious. You can't uniquely identify or reference a row in a table if the primary key of that table can be null. It's important to note that this rule applies to both simple and composite keys. For composite keys, none of the individual columns can be null. Fortunately, Microsoft Access automatically enforces the entity integrity rule for you. No component of a primary key in Microsoft Access can be null. The referential integrity rule says that the database must not contain any unmatched foreign key values. This implies that:
  • A row may not be added to a table with a foreign key unless the referenced value exists in the referenced table.
  • If the value in a table that's referenced by a foreign key is changed (or the entire row is deleted), the rows in the table with the foreign key must not be "orphaned."
In general, there are three options available when a referenced primary key value changes or a row is deleted. The options are:
  • Disallow. The change is completely disallowed.
  • Cascade. For updates, the change is cascaded to all dependent tables. For deletions, the rows in all dependent tables are deleted.
  • Nullify. For deletions, the dependent foreign key values are set to Null.
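In declarative SQL, these options correspond roughly to the referential actions sketched below (SQL Server-style syntax, shown only for comparison; the constraint name and the reuse of the earlier customer/order tables are assumptions):

    -- Illustrative only: one action per event; the alternatives appear as comments.
    ALTER TABLE tblOrder
        ADD CONSTRAINT FK_Order_Customer
        FOREIGN KEY (CustomerId) REFERENCES tblCustomer (CustomerId)
        ON DELETE NO ACTION      -- Disallow: can't delete a customer that still has orders
        ON UPDATE CASCADE;       -- Cascade: changes to CustomerId ripple into tblOrder
    -- The third option, Nullify, would be written as ON DELETE SET NULL
    -- (and requires that the foreign key column allow nulls).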
Microsoft Access allows you to disallow or cascade referential integrity updates and deletions using the Edit | Relationships command (see Figure 13). Nullify is not an option.
Figure 13. Specifying a relationship with referential integrity between the tblCustomer and tblOrder tables using the Edit | Relationships command. Updates of CustomerId in tblCustomer will be cascaded to tblOrder. Deletions of rows in tblCustomer will be disallowed if rows in tblOrder would be orphaned.
Note: When you wish to implement referential integrity in Microsoft Access, you must perform one additional step outside of the Edit | Relationships dialog: in table design, you must set the Required property for the foreign key column to Yes. Otherwise, Microsoft Access will allow your users to enter a Null foreign key value, thus violating strict referential integrity.

Database-Specific Integrity Rules

All integrity constraints that do not fall under entity integrity or referential integrity are termed database-specific rules or business rules. These types of rules are specific to each database and come from the rules of the business being modeled by the database. It is important to note that the enforcement of business rules is just as important as the enforcement of the general integrity rules discussed in the previous section. Note: Rules in Microsoft Access 2.0 are now enforced at the engine level, which means that forms, action queries, and table imports can no longer ignore your rules. Because of this change, however, column rules can no longer reference other columns or use domain, aggregate, or user-defined functions. Without the specification and enforcement of business rules, bad data will get into the database. The old adage "garbage in, garbage out" applies aptly to the application (or lack of application) of business rules. For example, a pizza delivery business might have the following rules that would need to be modeled in the database:
  • Order date must always be between the date the business started and the current date.
  • Order time and delivery time can be only during business hours.
  • Delivery time must be greater than or equal to Order time.
  • New orders cannot be created for discontinued menu items.
  • Customer zip codes must be within a certain range—the delivery area.
  • The quantity ordered can never be less than 1 or greater than 50.
  • Non-null discounts can never be less than 1 percent or greater than 30 percent.
Microsoft Access 2.0 supports the specification of validation rules for each column in a table. For example, the first business rule from the above list has been specified in Figure 14.
Figure 14. A column validation rule has been created to limit all order dates to some time between the first operating day of the business (5/3/93) and the current date.
Microsoft Access 2.0 also supports the specification of a global rule that applies to the entire table. This is useful for creating rules that cross-reference columns as the example in Figure 15 demonstrates. Unfortunately, you're only allowed to create one global rule per table, which could make for some awful validation error messages (e.g., "You have violated one of the following rules: 1. Delivery Date > Order Date. 2. Delivery Time > Order Time....").
Figure 15. A table validation rule has been created to require that deliveries be made on or after the date the pizza was ordered.
Although Microsoft Access business-rule support is better than that of most other desktop DBMS programs, it is still limited (especially the limitation of one global table rule), so you will typically build additional business-rule logic into applications, usually in the data entry forms. This logic should be layered on top of any table-based rules and can be built into the application using combo boxes, list boxes, and option groups that limit available choices, form-level and field-level validation rules, and event procedures. These application-based rules, however, should be used only when the table-based rules cannot do the job. The more business rules you can build in at the table level, the better, because these rules will always be enforced and will require less maintenance.
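For comparison, in a server DBMS several of the pizza-business rules listed above could be expressed as declarative CHECK constraints, as in the sketch below. The column names, types, and date literal are assumptions, and Access 2.0 uses its own ValidationRule syntax rather than CHECK, so treat this purely as an illustration of pushing rules down to the table level:

    -- Illustrative only (SQL Server-style); assumes the discount is stored as a fraction.
    CREATE TABLE tblPizzaOrder (
        OrderId      INT NOT NULL PRIMARY KEY,
        OrderDate    DATETIME NOT NULL,
        DeliveryDate DATETIME NULL,
        Quantity     INT NOT NULL,
        Discount     DECIMAL(4,2) NULL,
        -- Column-level rules:
        CONSTRAINT CK_OrderDate CHECK (OrderDate >= '1993-05-03'
                                       AND OrderDate <= GETDATE()),
        CONSTRAINT CK_Quantity  CHECK (Quantity BETWEEN 1 AND 50),
        CONSTRAINT CK_Discount  CHECK (Discount IS NULL
                                       OR Discount BETWEEN 0.01 AND 0.30),
        -- A cross-column ("table-level") rule:
        CONSTRAINT CK_Delivery  CHECK (DeliveryDate IS NULL
                                       OR DeliveryDate >= OrderDate)
    );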

A Practical Approach to Database Design

As mentioned earlier in this article, database design is more art than science. While it's true that a properly designed database should follow the normal forms and the relational model, you still have to come up with a design that reflects the business you are trying to model. Relational database design theory can usually tell you what not to do, but it won't tell you where to start or how to manage your business. This is where it helps to understand the business (or other scenario) you are trying to model. A well-designed database requires business insight, time, and experience. Above all, it shouldn't be rushed. To assist you in the creation of databases, I've outlined the following 20-step approach to sound database design:
  1. Take some time to learn the business (or other system) you are trying to model. This will usually involve sitting down and meeting with the people who will be using the system and asking them lots of questions.
  2. On paper, write out a basic mission statement for the system. For example, you might write something like "This system will be used to take orders from customers and track orders for accounting and inventory purposes." In addition, list out the requirements of the system. These requirements will guide you in creating the database schema and business rules. For example, create a list that includes entries such as "Must be able to track customer address for subsequent direct mail."
  3. Start to rough out (on paper) the data entry forms. (If rules come to mind as you lay out the tables, add them to the list of requirements outlined in step 2.) The specific approach you take will be guided by the state of any existing system.
    • If this system was never before computerized, take the existing paper-based system and rough out the table design based on these forms. It's very likely that these forms will be non-normalized.
    • If the database will be converted from an existing computerized system, use its tables as a starting point. Remember, however, that it's very likely that the existing schema will be non-normalized. It's much easier to normalize the database now rather than later. Print out the existing schema, table by table, and the existing data entry forms to use in the design process.
    • If you are really starting from scratch (e.g., for a brand new business), then rough out on paper what forms you envision filling out.
  4. Based on the forms you created in step 3, rough out your tables on paper. If normalization doesn't come naturally (or from experience), you can start by creating one huge, non-normalized table per form that you will later normalize. If you're comfortable with normalization theory, try to keep it in mind as you create your tables, remembering that each table should describe a single entity.
  5. Look at your existing paper or computerized reports. (If you're starting from scratch, rough out the types of reports you'd like to see on paper.) For existing systems that aren't currently meeting the user needs, it's likely that key reports are missing. Create them now on paper.
  6. Take the roughed-out reports from step 5 and make sure that the tables from step 4 include this data. If information is not being collected, add it to the existing tables or create new ones.
  7. On paper, add several rows to each roughed-out table. Use real data if at all possible.
  8. Start the normalization process. First, identify candidate keys for every table and using the candidates, choose the primary key. Remember to choose a primary key that is minimal, stable, simple, and familiar. Every table must have a primary key! Make sure that the primary key will guard against all present and future duplicate entries.
  9. Note foreign keys, adding them if necessary to related tables. Draw relationships between the tables, noting if they are one-to-one or one-to-many. If they are many-to-many, then create linking tables.
  10. Determine whether the tables are in First Normal Form. Are all fields atomic? Are there any repeating groups? Decompose if necessary to meet 1NF.
  11. Determine whether the tables are in Second Normal Form. Does each table describe a single entity? Are all non-key columns fully dependent on the primary key? Put another way, does the primary key imply all of the other columns in each table? Decompose to meet 2NF. If the table has a composite primary key, then the decomposition should, in general, be guided by breaking the key apart and putting all columns pertaining to each component of the primary key in their own tables.
  12. Determine if the tables are in Third Normal Form. Are there any computed columns? Are there any mutually dependent non-key columns? Remove computed columns. Eliminate mutually dependent columns by breaking out lookup tables.
  13. Using the normalized tables from step 12, refine the relationships between the tables.
  14. Create the tables using Microsoft Access (or whatever database program you are using). If using Microsoft Access, create the relationships between the tables using the Edit | Relationships command. Add sample data to the tables.
  15. Create prototype queries, forms, and reports. While creating these objects, design deficiencies should become obvious. Refine the design as needed.
  16. Bring the users back in. Have them evaluate your forms and reports. Are their needs met? If not, refine the design. Remember to re-normalize if necessary (steps 8-12).
  17. Go back to the table design screen and add business rules.
  18. Create the final forms, reports, and queries. Develop the application. Refine the design as necessary.
  19. Have the users test the system. Refine the design as needed.
  20. Deliver the final system.
This list doesn't cover every facet of the design process, but it's useful as a framework for the process.

Breaking the Rules: When to Denormalize

Sometimes it's necessary to break the rules of normalization and create a database that is deliberately less normalized than it otherwise could be. You'll usually do this for performance reasons or because the users of the database demand it. While this won't get you any points with database design purists, ultimately you have to deliver a solution that satisfies your users. If you do break the rules, however, and decide to denormalize your database, it's important that you follow these guidelines:
  • Break the rules deliberately; have a good reason for denormalizing.
  • Be fully aware of the tradeoffs this decision entails.
  • Thoroughly document this decision.
  • Create the necessary application adjustments to avoid anomalies.
This last point is worth elaborating on. In most cases, when you denormalize, you will be required to create additional application code to avoid insertion, update, and deletion anomalies that a more normalized design would avoid. For example, if you decide to store a calculation in a table, you'll need to create extra event procedure code and attach it to the appropriate event properties of forms that are used to update the data on which the calculation is based. If you're considering denormalizing for performance reasons, don't always assume that the denormalized approach is the best. Instead, I suggest you first fully normalize the database (to Third Normal Form or higher) and then denormalize only if it becomes necessary for reasons of performance. If you're considering denormalizing because your users think they need it, investigate why. Often they will be concerned about simplifying data entry, which you can usually accomplish by basing forms on queries while keeping your base tables fully normalized. Here are several scenarios where you might choose to break the rules of normalization:
  • You decide to store an indexed computed column, Soundex, in tblCustomer to improve query performance, in violation of 3NF (because Soundex is dependent on LastName). The Soundex column contains the sound-alike code for the LastName column. It's an indexed column (with duplicates allowed) and is calculated using a user-defined function. If you wish to perform searches on the Soundex column with any but the smallest tables, you'll find a significant performance advantage to storing the Soundex column in the table and indexing this computed column. You'd likely use an event procedure attached to a form to perform the Soundex calculation and store the result in the Soundex column. To avoid update anomalies, you'll want to ensure that this column cannot be updated by the user and that it is updated every time LastName changes.
  • In order to improve report performance, you decide to create a column named TotalOrderCost that contains a sum of the cost of each order item in tblOrder. This violates 2NF because TotalOrderCost is dependent on the primary key of tblOrderDetail, not on tblOrder's primary key. TotalOrderCost is calculated on a form by summing the column TotalCost for each item. Since you often create reports that need to include the total order cost, but not the cost of individual items, you've broken 2NF to avoid having to join these two tables every time this report needs to be generated. As in the last example, you have to be careful to avoid update anomalies. Whenever a record in tblOrderDetail is inserted, updated, or deleted, you will need to update tblOrder, or the information stored there will be erroneous (a sketch of such an update follows this list).
  • You decide to include a column, SalesPerson, in the tblInvoice table, even though SalesId is also included in tblInvoice. This violates 3NF because the two non-key columns are mutually dependent, but it significantly improves the performance of certain commonly run reports. Once again, this is done to avoid a join to the tblEmployee table, but introduces redundancies and adds the risk of update anomalies.
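As a sketch of the compensating code the second scenario calls for, the statement below re-derives the stored TotalOrderCost for one order from its detail rows. The PerItemCost column and the literal order number are assumptions, and in an Access application you would typically run the equivalent from event procedures on the order-detail form rather than as raw SQL:

    -- Illustrative only: rerun this whenever that order's detail rows change,
    -- or the stored (denormalized) total will drift out of sync.
    UPDATE tblOrder
    SET TotalOrderCost = (
            SELECT COALESCE(SUM(d.Quantity * d.PerItemCost), 0)
            FROM tblOrderDetail AS d
            WHERE d.OrderId = tblOrder.OrderId
        )
    WHERE tblOrder.OrderId = 1001;                -- the order just edited (hypothetical)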

Summary

This article has covered the basics of database design in the context of Microsoft Access. The main concepts covered were:
  • The relational database model was created by E.F. Codd in 1969 and is founded on set theory and logic.
  • A database designed according to the relational model will be efficient, predictable, well performing, self-documenting and easy to modify.
  • Every table must have a primary key, which uniquely identifies rows in the table.
  • Foreign keys are columns used to reference a primary key in another table.
  • You can establish three kinds of relationships between tables in a relational database: one-to-one, one-to-many or many-to-many. Many-to-many relationships require a linking table.
  • Normalization is the process of simplifying the design of a database so that it achieves the optimum structure.
  • A well-designed database follows the Normal Forms.
  • The entity integrity rule forbids nulls in primary key columns.
  • The referential integrity rule says that the database must not contain any unmatched foreign key values.
  • Business rules are an important part of database integrity.
  • A well-designed database requires business insight, time, and experience.
  • Occasionally, you may need to denormalize for performance.
Database design is an important component of application design. If you take the time to design your databases properly, you'll be rewarded with a solid application foundation on which you can build the rest of your application.
Source: http://www.deeptraining.com/litwin/dbdesign/FundamentalsOfRelationalDatabaseDesign.aspx, by Paul Litwin