
In the last consulting job I did, I found that the main database on the server was not configured to create statistics automatically as needed. This clearly impacts any server running SQL Server, since it needs them to perform at its best.

This post will explain what SQL Server statistics are and how they are used. We will also see why it is important that they exist, and what happens when they are not created and updated automatically. We will leave for another time the cases where it might be convenient to disable them or manage them manually.

Every time we create or rebuild an index on a table in SQL Server, we are presented with an option asking whether or not to update the statistics, which by default instructs the server to do so. Likewise, the properties of a database present similar options, asking whether we want statistics to be created and updated automatically.
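As a reference, these same options can also be checked and changed from T-SQL. The sketch below assumes a database called TESTDB, which is just an illustrative name:

-- Check the current settings (1 = enabled, 0 = disabled)
select DATABASEPROPERTYEX('TESTDB', 'IsAutoCreateStatistics') as AutoCreate,
       DATABASEPROPERTYEX('TESTDB', 'IsAutoUpdateStatistics') as AutoUpdate

-- Enable automatic creation and update of statistics
ALTER DATABASE TESTDB SET AUTO_CREATE_STATISTICS ON
ALTER DATABASE TESTDB SET AUTO_UPDATE_STATISTICS ON
GO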

Let's start then. What are SQL Server statistics and what are they used for?

SQL Server statistics (from now on, just "statistics") are information about the distribution of the data stored in the columns of the tables of our database. Through statistics, the server knows whether the information in a column, for example, varies a lot, whether all the values are the same, or what degree of variation exists. This allows the server to "know" the data in the columns without having to read them every time. It does not really know all the data, but the information it obtains is enough to make good decisions.

This information is used when we ask for data from tables that meet certain conditions (select .. from .. where). The conditions specified in the WHERE clause of a query are analyzed by the query optimizer to determine the fastest way to obtain the required information. To do this, assuming we have a query with a number of conditions in the WHERE clause, the server examines the statistics associated with the columns referenced in the WHERE, as well as the existing indexes on the tables and columns involved. In the case of indexes, SQL Server maintains a set of statistics for each index (clustered or non-clustered), in a way similar to the statistics it keeps for an independent column of a table.

In the case where the query has only one condition, there are not many possible options. If there is an index on the column we are filtering on, in most cases it will be used (depending on the statistics and other factors); otherwise, a SCAN will be performed on the table or on the clustered index (if it has one).
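A quick way to see which choice the optimizer made, shown here only as a rough sketch using the TEST table created below, is to ask for the estimated plan with SHOWPLAN_TEXT before running the query:

SET SHOWPLAN_TEXT ON
GO
-- With an index on the filtered column we would expect an Index Seek;
-- without one, a Table Scan or a Clustered Index Scan appears instead.
select * from test where identificador = 10
GO
SET SHOWPLAN_TEXT OFF
GO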

Let's look at a simple example of how SQL Server handles statistics. With our database configured to create statistics automatically, we create a table with the following structure:

CREATE TABLE [dbo].[TEST] (
    [identificador] [int] IDENTITY (1, 1) NOT NULL CONSTRAINT [PK_TEST] PRIMARY KEY CLUSTERED,
    [nombre] [varchar] (50) COLLATE Modern_Spanish_CI_AS NOT NULL,
    [apellido] [varchar] (50) COLLATE Modern_Spanish_CI_AS NOT NULL,
    [direccion] [varchar] (100) COLLATE Modern_Spanish_CI_AS NOT NULL,
    [fechanacimiento] [datetime] NOT NULL,
    [login] [varchar] (20) COLLATE Modern_Spanish_CI_AS NULL
) ON [PRIMARY]
GO

Now we insert some records into the TEST table, copying them from the EMPLOYEE table of the PUBS database.

insert into test (nombre, apellido, direccion, fechanacimiento)
select fname, lname, fname + ' ' + lname + ' ' + cast(hire_date as varchar(100)), hire_date
from pubs..employee

Now we have our table with a clustered index on the identity column, and a small number of records. For the purposes of our demo it is not necessary to have lots of records.

There are two traditional ways of viewing statistics. One is by querying the internal catalog of SQL Server, and the other is through the graphical interface associated with the execution plan of a query. Since for now it is not my intention to look at execution plans, let's see how to get the statistics from the internal catalog of SQL Server.

Before seeing how it works, we must know what the internal catalogs are and how they are queried. Some of the tables we refer to in these queries do not necessarily exist physically; many of them are either just views or are built at run time. In addition, objects (tables, procedures, etc.) are not stored by the name we give them, but by an internal identifier. To obtain the internal identifier of an object there is a function called object_id('object') that returns it and that can be used in a query or in a set statement. The catalog where the index and statistics information of a table is stored is called sysindexes. To view it, we must then filter the information by the id of the TEST table, as shown in the following query:

select * from sysindexes where id = object_id('test')

As a result of this query, we get something like the following. It is worth mentioning that the output is clipped on the right. The value of id and other columns will vary in each environment.

id          status      first          indid  root           minlen keycnt groupid dpages ...
----------- ----------- -------------- ------ -------------- ------ ------ ------- ------ ...
1043495492  2066        0x300000000100 1      0x470000000100 16     1      1       2      ...

The result indicates that for the TEST table (id = 1043495492) there is a single clustered index (indid = 1) that uses 2 data pages (dpages = 2).

If we now run a simple query on the table filtering by the apellido (last name) column, which we know is not covered by any index, changes will appear in sysindexes. For example, the following query, even though it returns no records, will cause a statistic to be created (because our database is configured to create them automatically).

select * from Test where apellido = 'Smith'

Running the query on sysindexes again, the result changes and a new record appears. We can tell it is a statistic because the number of pages is zero (dpages = 0) and the filegroup is 0.

id          status      first          indid  root           minlen keycnt groupid dpages ... name                      ...
----------- ----------- -------------- ------ -------------- ------ ------ ------- ------ ... ------------------------- ...
1043495492  2066        0x300000000100 1      0x470000000100 16     1      1       2      ... PK_TEST                   ...
1043495492  8388704     NULL           2      NULL           0      1      0       0      ... _WA_Sys_00000003_3E327A44 ...
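In the output above we can spot the new _WA_Sys_00000003_3E327A44 row. As an optional complement, the statistics on the table can also be listed with the system procedure sp_helpstats, which shows each statistic together with the columns it covers, without having to read sysindexes directly:

exec sp_helpstats 'test', 'ALL'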

As we can see, the statistic was created without us having to worry about it. Now, if we want to know what the statistic contains, there is an administration command called DBCC SHOW_STATISTICS (table, index | statistic) for that purpose. The result of executing DBCC SHOW_STATISTICS (test, _WA_Sys_00000003_3E327A44) is shown below.

Name                       Updated              Rows  Rows Sampled  Steps  Density  Average key length  String Index
-------------------------- -------------------- ----- ------------- ------ -------- ------------------- -------------
_WA_Sys_00000003_3E327A44  Feb 2 2006  8:55PM   100   100           89     1        14,23               YES

All density   Average Length  Columns
------------- --------------- --------------------
0,01098901    14,23           apellido

RANGE_HI_KEY        RANGE_ROWS  EQ_ROWS  DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------------- ----------- -------- -------------------- ---------------
administrador       0           4        0                    1
Aladino Carcamo     0           1        0                    1
Alvaro Vega         0           1        0                    1
...
Vladimir Vera       0           1        0                    1

Although the result may seem complicated, it is not that much. The output is divided into three groups.

The first group gives us an overview of the statistic. Here we find its name, the date it was last updated, the number of rows in the table (Rows = 100), the number of rows that were considered for the sample (Rows Sampled = 100), the number of steps (Steps = 89, explained later), the density (do not pay much attention to this specific value, since density is better measured in the next group), and the average length of the column's data in the case of a statistic, or of the index key data in the case of an index.

The second group shows data specific to the column. In this case, the density (0.01098901), the average length (already seen) and the column name. In the case of an index, this section has several lines, showing the densities of the index from the first column alone up to all the columns combined. The density is obtained with the following equation:

Density = 1 / (cardinality of the index key)

The cardinality of the key corresponds to the number of unique values in the column or columns. The important thing is that the density should be as small as possible: the smaller it is, the better results SQL Server will obtain when searching. For example, if the density of an index is 0.3, it means that at best that index can only filter the data down to 30%, which can be considered a very bad result. A good value should be below 5%. In our example, a density of 0.0109 (about 1%) means that there are 1 / 0.0109 distinct values, that is, about 91. The query select count(distinct(apellido)) from test confirms this result against the table.
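To tie the numbers together, the relationship between density and the number of distinct values can be verified directly against the table; this is just a quick check:

-- Key cardinality: number of distinct values in the column
select count(distinct apellido) as cardinality from test

-- Density = 1 / cardinality; should be close to the 0.01098901 reported above
select 1.0 / count(distinct apellido) as density from test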

The third block corresponds to the distribution of the column's data in the table. For a multi-column index, only the value of the first column is considered. The information is segmented into ranges (Steps = 89), where each line covers the values that lie between the RANGE_HI_KEY of that line and the RANGE_HI_KEY of the line below. Since the explanation is not entirely clear on its own, let's apply it to the example. In the result displayed before, between 'administrador' and 'Aladino Carcamo' there are no other values (RANGE_ROWS = 0), there are 4 rows equal to 'administrador' (EQ_ROWS = 4), there are no distinct values in the range other than 'administrador' itself (DISTINCT_RANGE_ROWS = 0), and finally, the average number of rows for each distinct value in the range is one (AVG_RANGE_ROWS = 1). Note that DISTINCT_RANGE_ROWS does not include the rows equal to RANGE_HI_KEY, since those are already counted in EQ_ROWS.
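One way to see the histogram in action, purely as a rough check, is to compare the optimizer's row estimate with the actual rows for a value that appears in the histogram. With SET STATISTICS PROFILE ON, the EstimateRows column for the query below should be close to the EQ_ROWS value (4) shown for 'administrador':

SET STATISTICS PROFILE ON
GO
select * from test where apellido = 'administrador'
GO
SET STATISTICS PROFILE OFF
GO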

All this information lets the query optimizer know what is in the column or index without "touching" the data. And for the same reason, if we want the optimizer to always find the best option and the server to respond at its best, we must provide it with up-to-date statistics.

Finally, we can mention that statistics can be updated or deleted manually through the graphical interface or through SQL statements (update statistics, drop statistics). It should also be noted that SQL Server takes care of updating and removing them when it considers it necessary, but a maintenance task can also be added to update them every so often.
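As a reference, and only as a sketch of the statements involved (the statistic name is the one from this example and the st_nombre name is purely illustrative), that manual maintenance looks like this:

-- Update the statistics of the table reading all its rows
UPDATE STATISTICS test WITH FULLSCAN

-- Update all statistics in the current database
EXEC sp_updatestats

-- Create a statistic by hand on a specific column (illustrative name)
CREATE STATISTICS st_nombre ON test (nombre)

-- Remove a specific statistic
DROP STATISTICS test._WA_Sys_00000003_3E327A44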

