Showing posts with label effect. Show all posts

Friday, February 17, 2012

Effect on snapshots while reindexing

Does anyone know whether doing a reindex on a clustered or non-clustered index causes the snapshot file to grow? In other words, is the data that makes up the snapshot copied from the source to the snapshot database? If a normal reindex is done on the underlying database, will it block users from accessing the snapshot? Any help would be appreciated.

All the pages changed by the reindex are pushed to the snapshot, so its files will grow. It should not cause any blocking in the snapshot, though there can be a significant increase in the I/O load.
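To see the effect concretely, one could create a snapshot, rebuild an index in the source, and then check the snapshot's sparse-file size. This is only a sketch; the database, table, and file names are hypothetical, and it assumes SQL Server 2005+ Enterprise:

```sql
-- Hypothetical names throughout.
CREATE DATABASE SalesDB_snap
ON (NAME = SalesDB_Data, FILENAME = 'C:\Snapshots\SalesDB_snap.ss')
AS SNAPSHOT OF SalesDB;

-- Rebuilding pushes the before-image of every changed page into the snapshot:
ALTER INDEX ALL ON SalesDB.dbo.BigTable REBUILD;

-- The sparse file's actual on-disk size shows the growth:
SELECT size_on_disk_bytes
FROM sys.dm_io_virtual_file_stats(DB_ID('SalesDB_snap'), 1);
```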

Effect of Shrinking a DB

Just wondering, other than the effect of making the database smaller, how does the shrink call affect actual performance? Should I expect a performance increase after it is done? Thanks.

~piro

Generally, all shrink does is reduce the size of the database files; no real performance gain as far as I know. You have to ask yourself why they grew to that size in the first place before shrinking them, as it is a performance hit when they have to grow again.|||Thank you for replying.

So there really is no performance decrease when I shrink the database. The performance hit will come from db growth, which is normal. Do you know of any other commands or functions that are similar to Access's compact command?

~piro|||Not sure what Access's compact command does, but DBCC SHRINKDATABASE can move all data to contiguous pages and remove any remaining unused space. This has the potential of reducing I/O since you would need to read fewer pages to access all data.|||Good point, forgot about the moving of data pages.
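The closest SQL Server analogue of Access's compact is the DBCC SHRINKDATABASE command mentioned above. A hedged sketch with a hypothetical database and table name; note that moving pages during a shrink tends to fragment indexes, so rebuilding them afterwards is commonly recommended:

```sql
-- Shrink the hypothetical SalesDB, leaving 10 percent free space in each file:
DBCC SHRINKDATABASE (SalesDB, 10);

-- Shrinking moves pages and fragments indexes, so rebuild the important ones
-- afterwards (ALTER INDEX is 2005+ syntax; on 2000 use DBCC DBREINDEX):
ALTER INDEX ALL ON dbo.Orders REBUILD;  -- dbo.Orders is a placeholder table
```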

effect of more column in where clause

What are the effects of having a number of columns in the WHERE clause? I.e., if we use more columns in the WHERE clause, what will be the impact on performance?

this is important for me to design queries

Having a lot of unnecessary WHERE conditions will surely slow down the query.

The best thing that you can do is to streamline the logic into its simplest form.

|||Perhaps I misunderstand the question; my normal experience is that if the additional columns contribute to filtering out unwanted records, they usually enhance performance.|||

Yeah, that's right, but a poorly written WHERE clause will definitely slow down performance.

The trick is to use the best-suited functions and keywords in the WHERE clause.

Here is a useful link:

http://www.sql-server-performance.com/transact_sql.asp

regards,

joey

|||

Hi joeydj,

what do you mean by "the trick is to use the best suited where clause by making use of the powerful sql server functions"?

AMB

|||

oh sorry, got a grammar problem

here I have it corrected:

"The trick is to use the best-suited functions and keywords in the where clause"

hmmm... that's better..

example:

select ... from

where x=1 and x=2 and x=7 and x=9

maybe written as

where x in (1,2,7,9)

|||

Joey is normally better than that; please cut him some slack. I think what he means is

select ... from

where x=1 OR x=2 OR x=7 OR x=9

maybe written as

where x in (1,2,7,9)

|||

hahaha. not been here for some time.

hmmm thanks kent.

|||

Not to mention that there is no performance benefit in choosing one of these as opposed to the other, anyway. (There is a readability benefit, and the two forms will behave differently if you carelessly tack on AND <another condition> to the query.)

Steve Kass

Drew University

http://www.stevekass.com
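Steve's caution about carelessly tacking on AND comes down to operator precedence: AND binds tighter than OR, so the OR form silently changes meaning while the IN form does not. A hypothetical fragment:

```sql
-- AND binds tighter than OR, so these two filters are NOT equivalent:
WHERE x = 1 OR x = 2 AND y = 3    -- parsed as: x = 1 OR (x = 2 AND y = 3)
WHERE x IN (1, 2) AND y = 3       -- y = 3 applies to both values of x
```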

|||

Can you give an example of the choices you have to make? Usually if you add "more columns in where clause", you change the meaning of the query (but not always), and the first goal of designing a query is for it to ask the right question...

Steve Kass

Drew University

http://www.stevekass.com

|||

One thing that people sometimes forget is that a full table scan can sometimes be better than an indexed scan. If you are ultimately going to read every block of data from disk via an index scan then you might want to forgo the index. Modern databases with good statistics can usually establish an execution plan that is "good enough" but it can still pay to understand your data.

Here is a quick example.

Let's say I have a database of people and for some reason their gender is overwhelmingly biased in one direction. An optimizer might look at the number of unique values for gender and assume a 50/50 split in the data. From the optimizer's best guess, it might make good sense to always use an index on gender to access the rows of the table. However, that might not be the case in practice.

Let's say the table is 100k records with :

90% F

10% M

To find the men an indexed lookup on gender is probably a good thing. On the other hand an indexed lookup to find women will result in more disk IO and slower performance than a full table scan (not counting your network).

Knowing this distribution ahead of time might lead someone looking for all women to do something like

Select * from employees where gender + '' = 'F'
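The `gender + ''` trick above works because the concatenation makes the predicate non-sargable, so the optimizer cannot seek the index on gender. On SQL Server, an index hint can express the same intent more explicitly; both forms below are sketches against the thread's hypothetical employees table:

```sql
-- Non-sargable predicate: the index on gender cannot be used for a seek.
SELECT * FROM employees WHERE gender + '' = 'F';

-- More explicit alternative: INDEX(0) forces a clustered index or heap scan.
SELECT * FROM employees WITH (INDEX(0)) WHERE gender = 'F';
```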

|||

Thanks for all your views.

Let me give you a specific example.

My database has a "branch name" field in all the tables, and we have a separate copy of the database for each branch, so a particular branch user will connect to his branch, which has records only for that branch.

In this situation, there is no need to filter the records again with "branch name", but if there is no performance issue, I wish to include it in the WHERE clause to be 100% sure that all the records in the query output have the same branch name.

|||

In your situation, I think you could add the "branch name" column without a performance issue, but you need to create an index or statistics for the column. As a result, the query optimizer understands that "branch name" is the same for all records and doesn't use it in the plan.

In any case, you should check the query plans and only then decide.

|||

This is a second good example of what I am talking about.

If your database is physically segregated by branch already, then you want to make sure you know what the optimizer is doing when you add a "failsafe"

Where BranchId = 10

or alternatively

Where BranchName = 'Downtown'

to your queries.

If adding that clause causes the optimizer to include an index on BranchId as part of the execution plan, then you will hurt your performance as a result of extra disk I/O and memory use. This might or might not be a concern for your infrastructure, though it sounds like if you are physically partitioning your database by branch, you are probably concerned about database performance.

Although it might seem counterintuitive, it might be a good idea to experiment with:

Where BranchId + 0 = 10

or alternatively

Where BranchName + '' = 'Downtown'

This would ensure that no index on BranchId could be used in the execution plan.
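Either way, it is worth confirming what the optimizer actually does. A sketch comparing the two forms with SET STATISTICS IO (the table name is a hypothetical stand-in; the column is the thread's):

```sql
SET STATISTICS IO ON;

SELECT * FROM dbo.Orders WHERE BranchId = 10;      -- may drive an index on BranchId
SELECT * FROM dbo.Orders WHERE BranchId + 0 = 10;  -- non-sargable: forces a scan

SET STATISTICS IO OFF;
-- Compare the 'logical reads' reported for each statement.
```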

|||

Dear AMERMSAMER,

The bottom line is that having more columns in the WHERE clause does not necessarily slow down performance and may, in fact, actually improve performance. It all depends on the indexes on the table and the "selectivity" of the columns in the WHERE. If there is a unique key and all of the columns in the key are "covered" by the WHERE clause, no more may be needed. SQL will try to optimize the query by using these columns where available.
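As a sketch of the point above (names hypothetical): once the WHERE clause pins down a unique key, additional predicates are cheap residual checks rather than extra work:

```sql
CREATE UNIQUE INDEX IX_Employees_Id ON dbo.Employees (EmployeeId);

-- The seek is driven by EmployeeId; Status is just a residual predicate
-- applied to (at most) one row:
SELECT * FROM dbo.Employees
WHERE EmployeeId = 42 AND Status = 'Active';
```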

Effect of Joins on the Speed

Hi..

I want to know that when I do something like..

Select Query

Left Join

Select Query 2

Left Join

Select Query 3

How does it work actually?

As in, will Query 2 & Query 3 work only on the records retrieved by Query 1?

Or do all the select statements retrieve all the records, with the condition then applied to filter out the results?

Also, does the order of the Select statements make any difference on the speed?

Thanks

SQL Server has a component called the query optimizer. It will read the query and then devise what it thinks is the best plan to come up with the results. You can see this plan when you type a query in Management Studio (or Query Analyzer in 2000) and then press CTRL-L.

The order of the joins does not make a difference, unless you tell the optimizer to join the tables in a specific order by using the 'force order' query hint. I have had complex queries where using a hint reduced execution time dramatically, but usually you're better off letting SQL sort it out. What definitely will improve performance is coming up with the right indexing strategy.
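The 'force order' hint mentioned above looks like this; the query shape mirrors the original poster's chained left joins, with hypothetical table names:

```sql
SELECT *
FROM dbo.Q1
LEFT JOIN dbo.Q2 ON Q2.q1_id = Q1.id
LEFT JOIN dbo.Q3 ON Q3.q2_id = Q2.id
OPTION (FORCE ORDER);  -- join in the written order instead of the optimizer's choice
```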

The best book I know on this subject is 'Inside MS SQL Server', which I read for the 2000 edition. I'm not sure about the 2005 version; I haven't read it.

Regards,

Gert-Jan

|||

Thanks..

That helps a lot!!

|||

This is the book you want for 2005:

http://www.microsoft.com/MSPress/books/9615.aspx

It breaks down the querying process both physically and logically, and really looks at how to tune indexes and queries at a level I didn't previously realize was possible. Good read too (if you're into that sort of thing...)

Effect of changing the DBID

I already know that changing the DBID by detaching and re-attaching a
database messes up the full-text catalog, but does it have any other adverse
effects? Do jobs, logins, or anything else depend on the DBID?
Thanks,
Boris Zakharin
Prime Associates, inc
Hi,
As far as I know, there should not be any other issues. The job step stores
the database name, and logins also store the default database name for the
login. So there are no issues with the database ID changing.
Thanks
Hari
MCDBA
"Boris Zakharin" <bzakharin@.primeassociates.com> wrote in message
news:#FUWnEnbEHA.1292@.TK2MSFTNGP10.phx.gbl...
> I already know that changing the DBID by detaching and re-attaching a
> database messes up the full-text catalog, but does it have any other
adverse
> effects? Do jobs, logins, logins, or anything else depend on the DBID?
> Thanks,
> Boris Zakharin
> Prime Associates, inc
>
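A quick way to observe the DBID change is DB_ID() before and after a detach/attach cycle. The database name and file paths below are hypothetical, and sp_attach_db is the SQL 2000-era syntax appropriate to this thread:

```sql
SELECT DB_ID('MyDatabase') AS dbid_before;

EXEC sp_detach_db 'MyDatabase';
EXEC sp_attach_db 'MyDatabase',
     'C:\Data\MyDatabase.mdf',
     'C:\Data\MyDatabase_log.ldf';

SELECT DB_ID('MyDatabase') AS dbid_after;  -- not guaranteed to equal dbid_before
```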

Effect of adding additional CPUs

We have a 4 CPU 8GB Dell server running SQL2k Enterprise on Windows 2003
Enterprise.
What will be the effect on SQL Server performance if we add more CPUs?
Are there any calculations (or numbers available) we can do to determine the
effect of additional CPUs on SQL Server performance (no. of threads, no. of
user connections, etc.)?
--
RK

It has a lot to do with what it is you are doing and how you are doing it.
Since that varies so much, there is little a standard calculation will achieve.
With more CPUs you have the potential to handle more individual
transactions concurrently. You can also potentially do multi-threaded
operations faster, such as DBREINDEX and CHECKDB, but that does not
guarantee it will. Do you have processor queue issues now?
Andrew J. Kelly SQL MVP
"RK73" <RK73@.discussions.microsoft.com> wrote in message
news:B616692D-BB49-4038-9441-EE9F0756ED67@.microsoft.com...
> We have 4 CPU 8GB Dell server running SQL2k Enterprise on Windows 2003
> Enterprise.
> What will be the effect on SQL server performance if we add more CPUs ?
> Are there any calulations (or numbers available) we can do to determine
> the
> effect of additional CPUs on SQLl server performace ( no of threads, no of
> user connections etc)
> --
> RK|||No, I do not have processor queue issues as of now. Ours is a web application
with lots of selects but few inserts/updates (hotel reservations). The reason I
am doing this is for future growth. The product has to be able to support
five times the current load. I am guessing increasing the load by 5 times will
definitely cause CPU queue issues. But is increasing the CPUs a good
route to take to handle more load? Any good reading material on this
subject?
RK
"Andrew J. Kelly" wrote:

> It has a lot to do with what it is you are doing and how you are doing it.
> Since that varies so much there is little a std calculation will achieve.
> With more CPU's you have the potential to handle more individual
> transactions concurrently. You can also potentially do multi-threaded
> operations faster such as DBREINDEX and CHECKDB etc. But that does not
> guarantee it will. Do you have processor queue issues now?
> --
> Andrew J. Kelly SQL MVP
>
> "RK73" <RK73@.discussions.microsoft.com> wrote in message
> news:B616692D-BB49-4038-9441-EE9F0756ED67@.microsoft.com...
>
>|||Increasing the CPUs is always a good way to deal with increased load, but
you need to do a few things first.
Make sure all the code and tables are fully optimized. A bad query or lack
of index can drag down a good server fast.
Make sure you have enough ram to keep the relevant data in cache.
Have a properly configured disk subsystem.
Set the MAXDOP at the server level to less than the total number of procs to
allow concurrent queries in peak times.
Andrew J. Kelly SQL MVP
"RK73" <RK73@.discussions.microsoft.com> wrote in message
news:1B595AAC-B34F-4414-8AF4-C6F664171AF0@.microsoft.com...[vbcol=seagreen]
> no I do not have preocessor queue issues as of now. Ours is a web
> application
> with lot of selects but few insert/updates (hotel reservations). The
> reason I
> am doing this is for future growth. The product has to be able to support
> five times the current load. I am guessing increasing the load by 5 time
> will
> definately cause CPU queue issues. But Increasing the CPUs- is it a good
> route to take to handle more load ? Any good reading material on this
> subject.
> --
> RK
>
> "Andrew J. Kelly" wrote:
>|||What server do you have? We were running 500 users on a Dell 8450 with 4
900Mhz cpus.
It failed a month ago and we had to quickly swap in a modern Compaq with 2 x
3.2ghz hyperthreading cpu's.
The dell had 24 disks running raid 10 arrays in two split bus PV200s off two
perc3/dc controllers.
The compaq has only 6 disks but just wipes the floor with the old dell.
While I've had the Dell down I tried replacing the Dell Perc3's with the
latest Compaq controllers. Despite the controllers being U320, they have to
run U160, as the 15K rpm Fujitsu drives in the Dell are a few years old. The
sustained disk throughput increased by a factor of 2, both for reads and
writes. I was never happy with the Perc3's, but now the Compaq controllers
proved the point. Note also that dell array manager does not set up raid 10
correctly as a stripe of mirrors. It sets it up as a span of mirrors. You
have to set up the raid 10 arrays through the dell controller bios to
achieve true raid 10.
Stats, all on a dell 8450, Windows 2003, Raid 10 on 8 disks over two
channels (dual channel controllers). Write back cache enabled. all stats
unbuffered by windows.
Test tool Sisoft Sandra.
Perc3/DC. Set up by array manager
Sequential Read 33 Mb/sec
Random Read 11 Mb/sec
Sequential Write 22 Mb/sec
Random Write 19 Mb/sec
Perc3/DC. Set up bios
Sequential Read 120 Mb/sec
Random Read 115 Mb/sec
Sequential Write 27 Mb/sec
Random Write 25 Mb/sec
Compaq Smart Array 6402
Sequential Read 228 Mb/sec
Random Read 161 Mb/sec
Sequential Write 47 Mb/sec
Random Write 36 Mb/sec
Shocking.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:O%23fiU6crFHA.2540@.TK2MSFTNGP09.phx.gbl...
> Increasing the CPU's is always a good way to deal with increased load but
> you need to do a few things first.
> Make sure all the code and tables are fully optimized. A bad query or
> lack of index can drag down a good server fast.
> Make sure you have enough ram to keep the relevant data in cache.
> Have a properly configured disk subsystem.
> Set the MAXDOP at the server level to less than the total number of procs
> to allow concurrent queries in peak times.
> --
> Andrew J. Kelly SQL MVP
>
> "RK73" <RK73@.discussions.microsoft.com> wrote in message
> news:1B595AAC-B34F-4414-8AF4-C6F664171AF0@.microsoft.com...
>|||Thank you both for your comments.
I have SQL Server 2k / Windows 2003 EE on a Dell PE 6850, 8 GB RAM, 4x3.33GHz
hyperthreaded CPUs, and two Dell PERC 4/DC controllers. Currently I am zooming
along. It is only the future I am worried about.
Thanks for the true RAID 10 tip.
RK
"Paul Cahill" wrote:

> What server do you have? We were running 500 users on a Dell 8450 with 4
> 900Mhz cpus.
> It failed a month ago and we had to quickly swap in a modern Compaq with 2 x
> 3.2ghz hyperthreading cpu's.
> The dell had 24 disks running raid 10 arrays in two split bus PV200s off two
> perc3/dc controllers.
> The compaq has only 6 disks but just wipes the floor with the old dell.
> While I've had the dell down I tried replacing the Dell Perc3's with the
> latest compaq controllers. Dispite the controllers being U320 they have to
> run U160 as the 15K rpm Fujitsu drives in the dell are a few years old. The
> sustained disk throughput increased by a factor of 2 both for reads and
> writes. I was never happy with the perc3's but now the compaq controllers
> proved the point. Note also that dell array manager does not set up raid 10
> correctly as a stripe of mirrors. It sets it up as a span of mirrors. You
> have to set up the raid 10 arrays through the dell controller bios to
> achieve true raid 10.
> Stats, all on a dell 8450, Windows 2003, Raid 10 on 8 disks over two
> channels (dual channel controllers). Write back cache enabled. all stats
> unbuffered by windows.
> Test tool Sisoft Sandra.
> Perc3/DC. Set up by array manager
> Sequential Read 33 Mb/sec
> Random Read 11 Mb/sec
> Sequential Write 22 Mb/sec
> Random Write 19 Mb/sec
> Perc3/DC. Set up bios
> Sequential Read 120 Mb/sec
> Random Read 115 Mb/sec
> Sequential Write 27 Mb/sec
> Random Write 25 Mb/sec
> Compaq Smart Array 6402
> Sequential Read 228 Mb/sec
> Random Read 161 Mb/sec
> Sequential Write 47 Mb/sec
> Random Write 36 Mb/sec
> Shocking.
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:O%23fiU6crFHA.2540@.TK2MSFTNGP09.phx.gbl...
>
>

Wednesday, February 15, 2012

Effect of adding additional CPUs

We have a 4-CPU, 8 GB Dell server running SQL2k Enterprise on Windows 2003
Enterprise.
What will be the effect on SQL Server performance if we add more CPUs?
Are there any calculations (or numbers available) we can do to determine the
effect of additional CPUs on SQL Server performance (no of threads, no of
user connections, etc.)?
--
RK|||It has a lot to do with what it is you are doing and how you are doing it.
Since that varies so much there is little a std calculation will achieve.
With more CPU's you have the potential to handle more individual
transactions concurrently. You can also potentially do multi-threaded
operations faster such as DBREINDEX and CHECKDB etc. But that does not
guarantee it will. Do you have processor queue issues now?
--
Andrew J. Kelly SQL MVP
"RK73" <RK73@.discussions.microsoft.com> wrote in message
news:B616692D-BB49-4038-9441-EE9F0756ED67@.microsoft.com...
> We have 4 CPU 8GB Dell server running SQL2k Enterprise on Windows 2003
> Enterprise.
> What will be the effect on SQL server performance if we add more CPUs ?
> Are there any calculations (or numbers available) we can do to determine
> the
> effect of additional CPUs on SQL Server performance ( no of threads, no of
> user connections etc)
> --
> RK|||No, I do not have processor queue issues as of now. Ours is a web application
with a lot of selects but few inserts/updates (hotel reservations). The reason I
am doing this is for future growth. The product has to be able to support
five times the current load. I am guessing increasing the load by 5 times will
definitely cause CPU queue issues. But is increasing the CPUs a good
route to take to handle more load? Any good reading material on this
subject?
--
RK
"Andrew J. Kelly" wrote:
> It has a lot to do with what it is you are doing and how you are doing it.
> Since that varies so much there is little a std calculation will achieve.
> With more CPU's you have the potential to handle more individual
> transactions concurrently. You can also potentially do multi-threaded
> operations faster such as DBREINDEX and CHECKDB etc. But that does not
> guarantee it will. Do you have processor queue issues now?
> --
> Andrew J. Kelly SQL MVP
>
> "RK73" <RK73@.discussions.microsoft.com> wrote in message
> news:B616692D-BB49-4038-9441-EE9F0756ED67@.microsoft.com...
> > We have 4 CPU 8GB Dell server running SQL2k Enterprise on Windows 2003
> > Enterprise.
> >
> > What will be the effect on SQL server performance if we add more CPUs ?
> > Are there any calculations (or numbers available) we can do to determine
> > the
> > effect of additional CPUs on SQL Server performance ( no of threads, no of
> > user connections etc)
> > --
> > RK
>
>|||Increasing the CPU's is always a good way to deal with increased load but
you need to do a few things first.
Make sure all the code and tables are fully optimized. A bad query or lack
of index can drag down a good server fast.
Make sure you have enough ram to keep the relevant data in cache.
Have a properly configured disk subsystem.
Set the MAXDOP at the server level to less than the total number of procs to
allow concurrent queries in peak times.
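As a reference for that last point, here is a minimal T-SQL sketch of setting MAXDOP at the server level. The value 3 is only an illustration for a 4-processor box, not a recommendation from this thread:

```sql
-- 'max degree of parallelism' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap parallel plans below the processor count (illustrative value for 4 CPUs)
EXEC sp_configure 'max degree of parallelism', 3;
RECONFIGURE;
```

Setting it to less than the number of processors leaves schedulers free so other queries can still run concurrently at peak times.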
--
Andrew J. Kelly SQL MVP
"RK73" <RK73@.discussions.microsoft.com> wrote in message
news:1B595AAC-B34F-4414-8AF4-C6F664171AF0@.microsoft.com...
> no, I do not have processor queue issues as of now. Ours is a web
> application
> with lot of selects but few insert/updates (hotel reservations). The
> reason I
> am doing this is for future growth. The product has to be able to support
> five times the current load. I am guessing increasing the load by 5 time
> will
> definitely cause CPU queue issues. But increasing the CPUs - is it a good
> route to take to handle more load ? Any good reading material on this
> subject.
> --
> RK
>
> "Andrew J. Kelly" wrote:
>> It has a lot to do with what it is you are doing and how you are doing
>> it.
>> Since that varies so much there is little a std calculation will achieve.
>> With more CPU's you have the potential to handle more individual
>> transactions concurrently. You can also potentially do multi-threaded
>> operations faster such as DBREINDEX and CHECKDB etc. But that does not
>> guarantee it will. Do you have processor queue issues now?
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "RK73" <RK73@.discussions.microsoft.com> wrote in message
>> news:B616692D-BB49-4038-9441-EE9F0756ED67@.microsoft.com...
>> > We have 4 CPU 8GB Dell server running SQL2k Enterprise on Windows 2003
>> > Enterprise.
>> >
>> > What will be the effect on SQL server performance if we add more CPUs ?
>> > Are there any calculations (or numbers available) we can do to determine
>> > the
>> > effect of additional CPUs on SQL Server performance ( no of threads, no
>> > of
>> > user connections etc)
>> > --
>> > RK
What server do you have? We were running 500 users on a Dell 8450 with 4
900 MHz CPUs.
It failed a month ago and we had to quickly swap in a modern Compaq with 2 x
3.2 GHz hyperthreading CPUs.
The Dell had 24 disks running RAID 10 arrays in two split-bus PV200s off two
Perc3/DC controllers.
The Compaq has only 6 disks but just wipes the floor with the old Dell.
While I've had the Dell down I tried replacing the Dell Perc3's with the
latest Compaq controllers. Despite the controllers being U320, they have to
run at U160 as the 15K rpm Fujitsu drives in the Dell are a few years old. The
sustained disk throughput increased by a factor of 2 for both reads and
writes. I was never happy with the Perc3's, but the Compaq controllers
proved the point. Note also that Dell Array Manager does not set up RAID 10
correctly as a stripe of mirrors; it sets it up as a span of mirrors. You
have to set up the RAID 10 arrays through the Dell controller BIOS to
achieve true RAID 10.
Stats, all on a Dell 8450, Windows 2003, RAID 10 on 8 disks over two
channels (dual-channel controllers). Write-back cache enabled. All stats
unbuffered by Windows.
Test tool: SiSoft Sandra.
Perc3/DC. Set up by array manager
Sequential Read 33 Mb/sec
Random Read 11 Mb/sec
Sequential Write 22 Mb/sec
Random Write 19 Mb/sec
Perc3/DC. Set up bios
Sequential Read 120 Mb/sec
Random Read 115 Mb/sec
Sequential Write 27 Mb/sec
Random Write 25 Mb/sec
Compaq Smart Array 6402
Sequential Read 228 Mb/sec
Random Read 161 Mb/sec
Sequential Write 47 Mb/sec
Random Write 36 Mb/sec
Shocking.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:O%23fiU6crFHA.2540@.TK2MSFTNGP09.phx.gbl...
> Increasing the CPU's is always a good way to deal with increased load but
> you need to do a few things first.
> Make sure all the code and tables are fully optimized. A bad query or
> lack of index can drag down a good server fast.
> Make sure you have enough ram to keep the relevant data in cache.
> Have a properly configured disk subsystem.
> Set the MAXDOP at the server level to less than the total number of procs
> to allow concurrent queries in peak times.
> --
> Andrew J. Kelly SQL MVP
>
> "RK73" <RK73@.discussions.microsoft.com> wrote in message
> news:1B595AAC-B34F-4414-8AF4-C6F664171AF0@.microsoft.com...
>> no, I do not have processor queue issues as of now. Ours is a web
>> application
>> with lot of selects but few insert/updates (hotel reservations). The
>> reason I
>> am doing this is for future growth. The product has to be able to support
>> five times the current load. I am guessing increasing the load by 5 time
>> will
>> definitely cause CPU queue issues. But increasing the CPUs - is it a good
>> route to take to handle more load ? Any good reading material on this
>> subject.
>> --
>> RK
>>
>> "Andrew J. Kelly" wrote:
>> It has a lot to do with what it is you are doing and how you are doing
>> it.
>> Since that varies so much there is little a std calculation will
>> achieve.
>> With more CPU's you have the potential to handle more individual
>> transactions concurrently. You can also potentially do multi-threaded
>> operations faster such as DBREINDEX and CHECKDB etc. But that does not
>> guarantee it will. Do you have processor queue issues now?
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "RK73" <RK73@.discussions.microsoft.com> wrote in message
>> news:B616692D-BB49-4038-9441-EE9F0756ED67@.microsoft.com...
>> > We have 4 CPU 8GB Dell server running SQL2k Enterprise on Windows 2003
>> > Enterprise.
>> >
>> > What will be the effect on SQL server performance if we add more CPUs
>> > ?
>> > Are there any calculations (or numbers available) we can do to
>> > determine
>> > the
>> > effect of additional CPUs on SQL Server performance ( no of threads,
>> > no of
>> > user connections etc)
>> > --
>> > RK
>>
Thank you both for your comments.
I have SQL Server 2k / Windows 2003 EE on a DELL PE 6850, 8 GB RAM, 4 x 3.33 GHz
hyperthreaded CPUs and two DELL PERC 4/DC controllers. Currently I am zooming
along. It is only the future I am worried about.
Thanks for the true RAID 10 tip.
--
RK
"Paul Cahill" wrote:
> What server do you have? We were running 500 users on a Dell 8450 with 4
> 900Mhz cpus.
> It failed a month ago and we had to quickly swap in a modern Compaq with 2 x
> 3.2ghz hyperthreading cpu's.
> The dell had 24 disks running raid 10 arrays in two split bus PV200s off two
> perc3/dc controllers.
> The compaq has only 6 disks but just wipes the floor with the old dell.
> While I've had the dell down I tried replacing the Dell Perc3's with the
> latest compaq controllers. Despite the controllers being U320 they have to
> run U160 as the 15K rpm Fujitsu drives in the dell are a few years old. The
> sustained disk throughput increased by a factor of 2 both for reads and
> writes. I was never happy with the perc3's but now the compaq controllers
> proved the point. Note also that dell array manager does not set up raid 10
> correctly as a stripe of mirrors. It sets it up as a span of mirrors. You
> have to set up the raid 10 arrays through the dell controller bios to
> achieve true raid 10.
> Stats, all on a dell 8450, Windows 2003, Raid 10 on 8 disks over two
> channels (dual channel controllers). Write back cache enabled. all stats
> unbuffered by windows.
> Test tool Sisoft Sandra.
> Perc3/DC. Set up by array manager
> Sequential Read 33 Mb/sec
> Random Read 11 Mb/sec
> Sequential Write 22 Mb/sec
> Random Write 19 Mb/sec
> Perc3/DC. Set up bios
> Sequential Read 120 Mb/sec
> Random Read 115 Mb/sec
> Sequential Write 27 Mb/sec
> Random Write 25 Mb/sec
> Compaq Smart Array 6402
> Sequential Read 228 Mb/sec
> Random Read 161 Mb/sec
> Sequential Write 47 Mb/sec
> Random Write 36 Mb/sec
> Shocking.
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:O%23fiU6crFHA.2540@.TK2MSFTNGP09.phx.gbl...
> > Increasing the CPU's is always a good way to deal with increased load but
> > you need to do a few things first.
> >
> > Make sure all the code and tables are fully optimized. A bad query or
> > lack of index can drag down a good server fast.
> > Make sure you have enough ram to keep the relevant data in cache.
> > Have a properly configured disk subsystem.
> > Set the MAXDOP at the server level to less than the total number of procs
> > to allow concurrent queries in peak times.
> >
> > --
> > Andrew J. Kelly SQL MVP
> >
> >
> > "RK73" <RK73@.discussions.microsoft.com> wrote in message
> > news:1B595AAC-B34F-4414-8AF4-C6F664171AF0@.microsoft.com...
> >> no, I do not have processor queue issues as of now. Ours is a web
> >> application
> >> with lot of selects but few insert/updates (hotel reservations). The
> >> reason I
> >> am doing this is for future growth. The product has to be able to support
> >> five times the current load. I am guessing increasing the load by 5 time
> >> will
> >> definitely cause CPU queue issues. But increasing the CPUs - is it a good
> >> route to take to handle more load ? Any good reading material on this
> >> subject.
> >>
> >> --
> >> RK
> >>
> >>
> >> "Andrew J. Kelly" wrote:
> >>
> >> It has a lot to do with what it is you are doing and how you are doing
> >> it.
> >> Since that varies so much there is little a std calculation will
> >> achieve.
> >> With more CPU's you have the potential to handle more individual
> >> transactions concurrently. You can also potentially do multi-threaded
> >> operations faster such as DBREINDEX and CHECKDB etc. But that does not
> >> guarantee it will. Do you have processor queue issues now?
> >>
> >> --
> >> Andrew J. Kelly SQL MVP
> >>
> >>
> >> "RK73" <RK73@.discussions.microsoft.com> wrote in message
> >> news:B616692D-BB49-4038-9441-EE9F0756ED67@.microsoft.com...
> >> > We have 4 CPU 8GB Dell server running SQL2k Enterprise on Windows 2003
> >> > Enterprise.
> >> >
> >> > What will be the effect on SQL server performance if we add more CPUs
> >> > ?
> >> > Are there any calculations (or numbers available) we can do to
> >> > determine
> >> > the
> >> > effect of additional CPUs on SQL Server performance ( no of threads,
> >> > no of
> >> > user connections etc)
> >> > --
> >> > RK
> >>
> >>
> >>
> >
> >
>
>

Effect of a SELECT Stored Procedure on tables or database

Hi all,

I have a few stored procedures which all perform SELECT queries on a table in the database. Do these kinds of stored procedures affect any other processes or procedures working on that table? I am talking about locks, blocks, etc.

For example, the database has a table which gets updated periodically by some process which I don't know about. Now I wrote some stored procedures just to do a SQL SELECT with some conditions in the WHERE clause. Is there any possibility that my stored procedure fails and, because of this, the process that runs on the table is not executed?

No, SELECT statements don't cause any kind of locking.|||

And if I don't talk about locking, can I rest assured that there won't be anything else that can have an effect?

|||

Selects are pretty unobtrusive. I can't think of anything you'd need to worry about.

|||

gt1329a:

No, SELECT statements don't cause any kind of locking.

If your isolation level is READ COMMITTED, SELECTs do take shared locks, but under READ COMMITTED each lock is released as soon as the row is read, so they rarely block UPDATEs. Locking is different from blocking. If reading up-to-the-minute committed data is not important, you can explicitly use NOLOCK.
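To make the distinction concrete, here is a hedged sketch of the two behaviours discussed; the table and column names are invented for illustration:

```sql
-- Under the default READ COMMITTED level, the SELECT takes short-lived
-- shared locks; a concurrent writer holding an exclusive lock on a row
-- can briefly block it, and vice versa.
SELECT ReservationId, GuestName
FROM dbo.Reservations                 -- hypothetical table
WHERE CheckInDate >= '20120201';

-- With NOLOCK (equivalent to READ UNCOMMITTED) no shared locks are taken,
-- so the SELECT neither blocks nor is blocked by writers, at the cost of
-- possibly reading uncommitted ("dirty") rows.
SELECT ReservationId, GuestName
FROM dbo.Reservations WITH (NOLOCK)
WHERE CheckInDate >= '20120201';
```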

Effect of "Do not recompute statistics" option

I'm looking into the automatic recompilation of stored procedures and
I have been reading up on the "Do not recompute statistics" option on
indexes.

Am I correct in concluding that enabling the "Do not recompute
statistics" option for an index will ensure that no automatic
recompilations will occur as a result of updates to data in that
index?

Am I also correct in understanding that "Update Statistics" will
still update statistics for the index even if the "Do not recompute
statistics" option is enabled?

Regards

Bjørn|||Bjørn (bjornsuneandersen@.gmail.com) writes:

Quote:

Originally Posted by

I'm looking into the automatic recompilation of stored procedures and
I have been reading up on the "Do not recompute statistics" option on
indexes.
>
Am I correct in concluding that enabling the "Do not recompute
statistics" option for an index will ensure that no automatic
recompilations will occur as a result of updates to data in that
index?
>
Am I also correct in understanding that "Update Statistics" will
still update statistics for the index even if the "Do not recompute
statistics" option is enabled?


That is how I would read it too. I would still prefer to use sp_autostats
to turn autostats off/on.

What sort of table are you considering to turn off autostats for? It seems
to me that this mainly is useful with tables that are modest in size, but
which are updated frequently.
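For reference, a minimal sketch of both routes; the table and index names below are invented:

```sql
-- Route 1: refresh the statistics now and switch off automatic updates
UPDATE STATISTICS dbo.Orders IX_Orders_Customer   -- hypothetical names
WITH NORECOMPUTE;

-- Route 2: sp_autostats flips the same flag without rebuilding anything
EXEC sp_autostats 'dbo.Orders', 'OFF', 'IX_Orders_Customer';

-- A manual UPDATE STATISTICS still works while auto-update is off; note
-- that running it without the NORECOMPUTE clause re-enables automatic
-- updates for that statistic, so repeat the clause to keep them off.
UPDATE STATISTICS dbo.Orders IX_Orders_Customer WITH NORECOMPUTE;
```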

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx