Friday, February 24, 2012
Electing NAS or SAN?
Hi
NAS is not supported for Clustering, except under very specific conditions.
Go SAN with Windows Hardware Compatibility List products.
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Lynce" <ojarana@.msn.com> wrote in message
news:O0z2xHkVFHA.2540@.tk2msftngp13.phx.gbl...
> Which is best for performance and support?
>
Sunday, February 19, 2012
efficiently creating random numbers in very large table
I need to sample data in a very large table in SQL Server 2000 (a gazillion rows of Performance Monitor statistics).
I'd like to take the top 5%, for instance, based upon a column containing random numbers.
Can anyone suggest a highly efficient method of populating a column with random numbers?
Thanks in advance.
Rod|||select TOP 5 PERCENT * from [YourTable] order by newid()|||
Thank you, I'll give that a go.
Regards,
Rod|||that won't populate your table with any random numbers obviously.
it will give you a random 5% slice of the table. a different slice each time you run it.|||Thanks, good point; maybe I can add another column to set a bit so that I can reproduce the sample. I'll have to test performance; perhaps someone has experience with this or a different technique to propose. Thank you.
Rod|||If you really want a column of random values, then just create a GUID column with a default of NEWID(). But this won't give you a random sample every time, of course.|||That's ok blindman, I just needed something that's efficient in terms of populating random values. Regards, Rod|||Of course this works, but if your table is really that big, beware of the time it takes to alter the table! SQL Server has to expand each record, so numerous page splits will occur, indexes will have to be rebuilt, etc, etc. This could take a couple of hours.|||...ugh.. Thanks. There does not seem to be a really efficient way of doing this...
Thanks for your input. Rod|||how many rows is the table?
also, you can generate random numbers in sql using rand() if you don't like guids. if a random number from 0-255 is sufficient you could store it in a tinyint and fewer page splits would result.
this code ran in 31 sec on my dev box. not great, but it is what it is:
set nocount on
-- table variable to hold the generated values
declare @t table (RandomColumn tinyint)
declare @i int
set @i = 0
-- rand() is re-evaluated for each insert statement, so every row gets a new value
while @i < 1000000
begin
insert into @t select round(rand() * 255, 0)
set @i = @i + 1
end
|||That may be OK. You're right, not great, but maybe we can live with that. Thanks for your code.
Regards,
Rod
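A set-based alternative to the loop above, for anyone finding this thread later: a minimal sketch, assuming a hypothetical table dbo.PerfStats holding the Performance Monitor rows. CHECKSUM(NEWID()) yields a different pseudo-random integer for each row, so no loop or cursor is needed, and the persisted column makes the 5% sample repeatable:
-- add a tinyint column for the random values (hypothetical table name)
ALTER TABLE dbo.PerfStats ADD RandomColumn tinyint NULL
-- populate it in one set-based pass; NEWID() is evaluated once per row
UPDATE dbo.PerfStats
SET RandomColumn = ABS(CHECKSUM(NEWID())) % 256
-- repeatable 5% sample: the same rows return until the column is repopulated
SELECT TOP 5 PERCENT *
FROM dbo.PerfStats
ORDER BY RandomColumn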
efficient SQL profiling
Is there a way to efficiently monitor SQL performance using SQL Profiler,
meaning: to see the distinct SQL queries sent to the server (ignoring
the parameter values sent with the queries)?
this means that queries like
select * from users where user_id = 10
and
select * from users where user_id = 20
will be shown as the same query, counted 2 times, and this way I can see which
query is performed most often on the server, and prioritize its performance
tuning.
creative suggestions invited.
thanx.|||If it is simply the where clause that you are trying to eliminate, then why
not log all your queries to a SQL table and then do a select distinct from
the table, and get the substring of the query that does not include the
where clause?
"z. f." <zigi@.info-scopeREMSPAM.co.il> wrote in message
news:Od5S7Gh%23DHA.3032@.TK2MSFTNGP10.phx.gbl...
> Hi,
> Is there a way to efficiently monitor SQL performance using SQL Profiler,
> and my meaning is to see distinct SQL query sent to server (by ignoring
> parameter values sent with the queries)
> this mean that query like
> select * from users where user_id = 10
> and
> select * from users where user_id = 20
> will be shown as the same query counted 2 times, and this way see which
> query is performed mostly on the server, and priorities it's performance
> tuning.
> creative suggestions invited.
> thanx.
>
>|||Thanx,
2 points:
1. how do i log all my queries to the database in an encapsulated way?
1.1 can i also log this way the time it took to execute?
2. my bottleneck might also be an execute statement - well, this will also work
with your suggestion, just truncated before the starting '('.
"Aaron Relph" <x@.x.com> wrote in message
news:OAEHgVh%23DHA.3436@.tk2msftngp13.phx.gbl...
> If it is simply the where clause that you are trying to eliminate, then
> why
> not log all your queries to a SQL table and then do a select distinct from
> the table, and get the substring of the query that does not include the
> where clause?
> "z. f." <zigi@.info-scopeREMSPAM.co.il> wrote in message
> news:Od5S7Gh%23DHA.3032@.TK2MSFTNGP10.phx.gbl...
>|||This may help -
http://www.sql-server-performance.c...ofiler_tips.asp
Ray Higdon MCSE, MCDBA, CCNA
--
"z. f." <zigi@.info-scopeREMSPAM.co.il> wrote in message
news:OQuACAi%23DHA.3292@.TK2MSFTNGP11.phx.gbl...
> Thanx,
> 2 points:
> 1. how do i log all my queries to the database in an encapsulated way?
> 1.1 can i also log this way the time it took to execute?
> 2. my bottleneck might also be an execute statement - well, this will go also
> with your suggestion, just truncated before the starting '('.
>
>
> "Aaron Relph" <x@.x.com> wrote in message
> news:OAEHgVh%23DHA.3436@.tk2msftngp13.phx.gbl...
>
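A sketch of the logging idea suggested above, for SQL Server 2000: fn_trace_gettable loads a saved trace file into a table, and grouping on the text before WHERE is a crude way to collapse queries that differ only in their parameter values. The file path and the WHERE-based normalization are assumptions here, not a general query parser:
-- load a saved Profiler/server-side trace into a temp table
SELECT CAST(TextData AS nvarchar(4000)) AS QueryText, Duration
INTO   #trace
FROM   ::fn_trace_gettable('C:\trace.trc', default)
WHERE  TextData IS NOT NULL
-- count executions per query pattern (text up to the WHERE clause)
SELECT CASE WHEN CHARINDEX('WHERE', UPPER(QueryText)) > 0
            THEN LEFT(QueryText, CHARINDEX('WHERE', UPPER(QueryText)) - 1)
            ELSE QueryText END AS QueryPattern,
       COUNT(*) AS Executions,
       AVG(Duration) AS AvgDuration
FROM   #trace
GROUP BY CASE WHEN CHARINDEX('WHERE', UPPER(QueryText)) > 0
              THEN LEFT(QueryText, CHARINDEX('WHERE', UPPER(QueryText)) - 1)
              ELSE QueryText END
ORDER BY COUNT(*) DESC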
Friday, February 17, 2012
Efficiency problems using PRINT in CURSORs
I'm having a CURSOR running through ~45000 entries in a table. If I, for
example, use PRINT inside the cursor, can that be a performance hit? Are
there other performance things to think about when using cursors?
Thanks,
Mats-Lennart|||Cursors in general are not recommended. If you could discuss more about why
you are using cursors there could be alternate solutions that this newsgroup
can provide you with.
--
HTH,
SriSamp
Email: srisamp@.gmail.com
Blog: http://blogs.sqlxml.org/srinivassampath
URL: http://www32.brinkster.com/srisamp
"Mats-Lennart Hansson" <ap_skallen@.hotmail.com> wrote in message
news:ePBVg2ASGHA.5736@.TK2MSFTNGP10.phx.gbl...
> Hi,
> I'm having a CURSOR running through ~45000 entries in a table. If I, for
> example, use PRINT inside the cursor, can that be a performance hit? Are
> there other performance things to think about when using cursors?
> Thanks,
> Mats-Lennart
>|||Thanks for your answer.
I know that it's not recommended, but in this case there is no other
(managable) solution. There is also no time for a redesign, so this is the
solution that will be used. However, I still wonder:
Can a lot of PRINTs be a performance problem? Are there other things that
can decrease performance, like user defined functions?
Thanks,
Mats-Lennart
"SriSamp" <ssampath@.sct.co.in> wrote in message
news:epxWI9ASGHA.5900@.tk2msftngp13.phx.gbl...
> Cursors in general are not recommended. If you could discuss more about
> why you are using cursors there could be alternate solutions that this
> newsgroup can provide you with.
> --
> HTH,
> SriSamp
> Email: srisamp@.gmail.com
> Blog: http://blogs.sqlxml.org/srinivassampath
> URL: http://www32.brinkster.com/srisamp
> "Mats-Lennart Hansson" <ap_skallen@.hotmail.com> wrote in message
> news:ePBVg2ASGHA.5736@.TK2MSFTNGP10.phx.gbl...
>|||Mats-Lennart Hansson wrote:
> Thanks for your answer.
> I know that it's not recommended, but in this case there is no other
> (managable) solution. There is also no time for a redesign, so this is the
> solution that will be used. However, I still wonder:
> Can a lot of PRINTs be a performance problem?
Of course. I don't know why you would use PRINT in a production system
however. PRINT is typically just debug code or for ad hoc stuff. Does
it matter if it's too late to change anyway? If performance is your
concern then PRINT may be insignificant next to the overhead of using a
cursor.
> Are there other things that
> can decrease performance, like user defined functions?
Generally speaking the more you do in a cursor loop the more processing
is required. One reason to prefer set-based solutions rather than
cursors is that the same isn't always true in declarative code -
performance doesn't necessarily degrade in line with complexity.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--|||Hi,
I have no intention of putting a lot of PRINTs in our production system; I'm
wondering simply out of curiosity. Can I expect a performance
boost when going into production without PRINT's compared to now, when I'm
in development? Are there other things to do to improve efficiency? These
are questions I'm interested in. I want to learn from my mistakes to improve
my future projects :)
Why I'm using CURSORs in this project is because we are converting data from
one database into another. Unfortunately, the original data is not
consistent and needs to be checked before being converted. Of course there
are other solutions than using CURSORs, but this seemed to be the most
straightforward way.
Instead of looping through 50000 entries, can it be more efficient to run
through 5000 at a time, opening and closing the cursor in between? Could
this reduce some "overhead" costs?
Thanks for replying,
Mats-Lennart
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1142425512.174670.273740@.i39g2000cwa.googlegroups.com...
> Mats-Lennart Hansson wrote:
> Of course. I don't know why you would use PRINT in a production system
> however. PRINT is typically just debug code or for ad hoc stuff. Does
> it matter if it's too late to change anyway? If performance is your
> concern then PRINT may be insignificant next to the overhead of using a
> cursor.
>
> Generally speaking the more you do in a cursor loop the more processing
> is required. One reason to prefer set-based solutions rather than
> cursors is that the same isn't always true in declarative code -
> performance doesn't necessarily degrade in line with complexity.
> --
> David Portas, SQL Server MVP
> Whenever possible please post enough code to reproduce your problem.
> Including CREATE TABLE and INSERT statements usually helps.
> State what version of SQL Server you are using and specify the content
> of any error messages.
> SQL Server Books Online:
> http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
> --
>
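On the batching question above: rather than reopening the cursor every 5000 rows, SQL Server 2000 lets you chunk a set-based statement with SET ROWCOUNT, which keeps each transaction (and the locks it holds) small. A minimal sketch, assuming a hypothetical work table dbo.SourceData with a Processed flag standing in for the real conversion logic:
-- limit each following statement to 5000 affected rows
SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    -- the real validation/conversion logic would go here
    UPDATE dbo.SourceData
    SET Processed = 1
    WHERE Processed = 0
    -- stop when a batch comes back empty
    IF @@ROWCOUNT = 0 BREAK
END
-- remove the limit again
SET ROWCOUNT 0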
Effects of using Transaction in SP on Performance?
Hi All;
I have a web application which is being used at large scale, and there are
up to 10,000 entries per day. Currently there is no Begin Tran/Commit
Tran/Rollback Tran, and because of that we found some data corruption. Now I have
changed all the transactional Stored Procedures and used Transactions in
them. Now I am thinking about the performance of the application when I use
Transactions in Stored Procedures which are called by each user very
frequently. Transactions lock objects, which might hurt the application's
performance. Please comment on it: should I use Transactions now or not?
Thanks.
Essa, M. Mughal
Software Developer
Canada|||You should DEFINITELY use transactions. Performance penalty or not. There
is no choice here. Data integrity is, by far, the MOST IMPORTANT THING IN A
DATABASE. Sorry for the all caps, but I had to drive it home. Do not
sacrifice your data quality. Otherwise there is very little reason to even
be using a database.
Adam Machanic
SQL Server MVP
http://www.datamanipulation.net
--
"Essa" <essamughal@.hotmail.com> wrote in message
news:46BFC928-7375-47ED-AA79-73FE92A45843@.microsoft.com...
> Hi All;
> I have a web application which is being used at large scale and there are
> upto 10,000 entries per day. Currently, there is no Begin Tran/Commit
> Tran/Rollback Tran and b/c of that we found some data corruption. Now, I
> changed all the transactional Stored Procedure and used Transaction into
> them. Now, I am thinking about the performance of the application when I
> used
> Transaction into Stored Procedures which are being used by each user very
> frequently. Transaction Lock the objects which might hurt the application
> performance. Please, comments on it and should I use Transaction now or
> not?
> Thanks.
> --
> Essa, M. Mughal
> Software Developer
> Canada|||Hi Adam;
Thanks for your strong recommendation. I really appreciate your way of
conveying the importance of the data. I have already changed all the
transactional stored procedures, but I was just wondering; so now I'll go live
with them and then see what happens. I hope nothing will happen, but it will
increase data integrity.
Thanks
"Adam Machanic" wrote:
> You should DEFINITELY use transactions. Performance penalty or not. Ther
e
> is no choice here. Data integrity is, by far, the MOST IMPORTANT THING IN
A
> DATABASE. Sorry for the all caps, but I had to drive it home. Do not
> sacrifice your data quality. Otherwise there is very little reason to eve
n
> be using a database.
>
> --
> Adam Machanic
> SQL Server MVP
> http://www.datamanipulation.net
> --
>
> "Essa" <essamughal@.hotmail.com> wrote in message
> news:46BFC928-7375-47ED-AA79-73FE92A45843@.microsoft.com...
>
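For reference, the usual SQL Server 2000 pattern for the change Essa describes: BEGIN TRAN, an @@ERROR check after each statement, and ROLLBACK on failure. This is a minimal sketch with hypothetical procedure, table, and column names, not the poster's actual code:
CREATE PROCEDURE dbo.usp_AddEntry
    @CustomerId int,
    @Amount money
AS
SET NOCOUNT ON
BEGIN TRANSACTION
INSERT INTO dbo.Orders (CustomerId, Amount)
VALUES (@CustomerId, @Amount)
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    RETURN 1  -- signal failure to the caller
END
UPDATE dbo.Customers
SET OrderCount = OrderCount + 1
WHERE CustomerId = @CustomerId
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    RETURN 1
END
COMMIT TRANSACTION
RETURN 0
Keeping the transaction as short as possible limits how long locks are held, which addresses the performance concern raised above.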
Effects of a Database restoration.
Does a database restoration perform any
update-statistics/defragmentation by default? We observed a marked
improvement in performance when we restored a database from a high-end
machine to a low-end machine.
Could someone shed some light on this?
Regards,
Thyagarajan Delli.|||Thyagu (tdelli@.gmail.com) writes:
> Does a database restoration perform any
> update-statistics/defragmentation by default.
As far as I know, no. Except, that if the MDF is very fragmented on the
source machine, and the target machine has space to accept it as
contiguous, you will see defragmentation on that level.
> We observed a marked improvement in performance when we restored the a
> database from an high end machine to a low machine.
Maybe the low-end machine has a single CPU? SQL Server sometimes goes
for parallel plans on multi-CPU machines that are not very efficient
at all.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
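If stale statistics are suspected after a restore, they can be refreshed explicitly; the restore itself just carries the statistics over from the source database. A minimal sketch (the table name is hypothetical):
-- refresh out-of-date statistics throughout the current database
EXEC sp_updatestats
-- or rebuild statistics on one table with a full scan
UPDATE STATISTICS dbo.Orders WITH FULLSCAN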
Effect of Shrinking a DB
~piro|||Generally, all shrink does is reduce the size of the database files; no real performance gain as far as I know. You have to ask yourself why they grew to that size in the first place before shrinking them, as it is a performance hit when they have to grow again.|||Thank you for replying.
So there really is no performance decrease when I shrink the database. The performance hit will come from db growth, which is normal. Do you know of any other commands or functions that are similar to Access's compact command?
~piro|||Not sure what Access's compact command does, but DBCC SHRINKDATABASE can move all data to contiguous pages and remove any remaining unused space. This has the potential of reducing I/O since you would need to read fewer pages to access all data.|||Good point, forgot about the moving of data pages.
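For reference, the commands discussed above, as a minimal sketch assuming a database named MyDB with a data file named MyDB_Data:
-- shrink the whole database, leaving 10 percent free space in each file
DBCC SHRINKDATABASE (MyDB, 10)
-- or shrink a single file to a target size in MB
DBCC SHRINKFILE (MyDB_Data, 500)
As noted above, shrinking only buys anything if the files will not simply have to grow again.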
effect of more column in where clause
what are the effects of having a number of columns in the WHERE clause? i.e., if we use more columns in the where clause, what will be the impact on performance?
this is important for me to design queries
having a lot of unnecessary where clauses will surely slow down the query if there are any.
the best thing that you can do is to streamline the logic into its simplest form
|||Perhaps I misunderstand the question; my normal experience is that if the additional columns contribute to filtering out unwanted records, then the additional columns in the where clause usually enhance performance.|||yeah, that's right, but a poorly written where clause will definitely slow down performance.
The trick is to use the best suited functions and keywords in the where clause
here some useful link:
http://www.sql-server-performance.com/transact_sql.asp
regards,
joey
|||Hi joeydj,
what do you mean by "the trick is to use the best suited where clause by making use of the powerful sql server functions"?
AMB
|||oh sorry. got a grammar problem
here i have it corrected
"The trick is to use the best suited functions and keywords in the where clause"
hmmm... thats better..
example:
select ... from
|||where x=1 and x=2 and x=7 and x=9
maybe written as
where x in (1,2,7,9)
Joey is normally better than that; please cut him some slack. I think what he means is
select ... from
where x=1 OR x=2 OR x=7 OR x=9
maybe written as
where x in (1,2,7,9)
|||
hahaha. not been here for sometime.
hmmm thanks kent.
|||Not to mention that there is no performance benefit in choosing one of these as opposed to the other, anyway. (There is a readability benefit, and the two forms will behave differently if you carelessly tack on AND <another condition> to the query.)
Steve Kass
Drew University
http://www.stevekass.com
|||Can you give an example of the choices you have to make? Usually if you add "more columns in where clause", you change the meaning of the query (but not always), and the first goal of designing a query is for it to ask the right question...
Steve Kass
Drew University
http://www.stevekass.com
|||One thing that people sometimes forget is that a full table scan can sometimes be better than an indexed scan. If you are ultimately going to read every block of data from disk via an index scan then you might want to forgo the index. Modern databases with good statistics can usually establish an execution plan that is "good enough" but it can still pay to understand your data.
Here is a quick example.
Let's say I have a database of people and for some reason their gender is overwhelmingly biased in one direction. An optimizer might look at the number of unique values for gender and assume a 50/50 split in the data. From the optimizer's best guess, it might make good sense to always use an index on gender to access the rows of the table. However, that might not be the case in practice.
Let's say the table is 100k records with:
90% F
10% M
To find the men an indexed lookup on gender is probably a good thing. On the other hand an indexed lookup to find women will result in more disk IO and slower performance than a full table scan (not counting your network).
Knowing this distribution ahead of time might lead someone looking for all women to do something like
Select * from employees where gender + '' = 'F'
|||thanks for all your views.
let me give u a specific example.
my database has a "branch name" field in all the tables, and we have a separate copy of the database for each branch. so a particular branch user will connect to his branch, which has records only for that branch.
in this situation, there is no need to filter the records again with "branch name", but if there is no performance issue, i wish to include it in the WHERE clause to be 100% sure that all the records the query outputs have the same branch name.
|||In your situation, I think you could add the "branch name" column without a performance issue. But you need to create an index or statistics for this column. As a result, the query optimizer understands that "branch name" is the same for all records and doesn't use it in the plan.
But in any case you should check the query plans and only after this decide
|||This is a second good example of what I am talking about.
If your database is physically segregated by branch already, then you want to make sure you know what the optimizer is doing when you add a "failsafe"
Where BranchId = 10
or alternatively
Where BranchName = 'Downtown'
to your queries.
If adding that clause causes the optimizer to include an index on BranchId as part of the execution plan then you will hurt your performance as a result of extra disk i/o and memory use. This might or might not be a concern for your infrastructure. Though it sounds like if you are physically partitioning your database by branch then you might be concerned about database performance.
Although it might seem counterintuitive, it might be a good idea to experiment with:
Where BranchId + 0 = 10
or alternatively
Where BranchName + '' = 'Downtown'
This would ensure that no index on BranchId could be used in the execution plan.
|||Dear AMERMSAMER,
The bottom line is that having more columns in the WHERE clause does not necessarily slow down performance and may, in fact, actually improve performance. It all depends on the indexes on the table and the "selectivity" of the columns in the WHERE. If there is a unique key and all of the columns in the key are "covered" by the WHERE clause, no more may be needed. SQL will try to optimize the query by using these columns where available.
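A quick way to verify what the redundant branch predicate does to the plan, as discussed above, is to compare estimated plans with and without it. A minimal sketch, assuming a hypothetical Orders table with a BranchName column (SQL Server 2000 syntax):
-- return the estimated plan instead of executing the queries
SET SHOWPLAN_TEXT ON
GO
SELECT * FROM dbo.Orders WHERE OrderDate >= '20050101'
GO
-- same query with the redundant filter; check whether an extra index operation appears
SELECT * FROM dbo.Orders WHERE OrderDate >= '20050101' AND BranchName = 'Downtown'
GO
SET SHOWPLAN_TEXT OFF
GO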
Effect of adding additional CPUs
We have 4 CPU 8GB Dell server running SQL2k Enterprise on Windows 2003 Enterprise.
What will be the effect on SQL server performance if we add more CPUs?
Are there any calculations (or numbers available) we can do to determine the
effect of additional CPUs on SQL server performance (no. of threads, no. of
user connections, etc.)?
--
RK|||It has a lot to do with what it is you are doing and how you are doing it.
Since that varies so much there is little a std calculation will achieve.
With more CPU's you have the potential to handle more individual
transactions concurrently. You can also potentially do multi-threaded
operations faster such as DBREINDEX and CHECKDB etc. But that does not
guarantee it will. Do you have processor queue issues now?
Andrew J. Kelly SQL MVP
"RK73" <RK73@.discussions.microsoft.com> wrote in message
news:B616692D-BB49-4038-9441-EE9F0756ED67@.microsoft.com...
> We have 4 CPU 8GB Dell server running SQL2k Enterprise on Windows 2003
> Enterprise.
> What will be the effect on SQL server performance if we add more CPUs ?
> Are there any calulations (or numbers available) we can do to determine
> the
> effect of additional CPUs on SQLl server performace ( no of threads, no of
> user connections etc)
> --
> RK|||no I do not have preocessor queue issues as of now. Ours is a web applicatio
n
with lot of selects but few insert/updates (hotel reservations). The reason
I
am doing this is for future growth. The product has to be able to support
five times the current load. I am guessing increasing the load by 5 time wil
l
definately cause CPU queue issues. But Increasing the CPUs- is it a good
route to take to handle more load ? Any good reading material on this
subject.
RK
"Andrew J. Kelly" wrote:
> It has a lot to do with what it is you are doing and how you are doing it.
> Since that varies so much there is little a std calculation will achieve.
> With more CPU's you have the potential to handle more individual
> transactions concurrently. You can also potentially do multi-threaded
> operations faster such as DBREINDEX and CHECKDB etc. But that does not
> guarantee it will. Do you have processor queue issues now?
> --
> Andrew J. Kelly SQL MVP
>
> "RK73" <RK73@.discussions.microsoft.com> wrote in message
> news:B616692D-BB49-4038-9441-EE9F0756ED67@.microsoft.com...
>
>|||Increasing the CPU's is always a good way to deal with increased load but
you need to do a few things first.
Make sure all the code and tables are fully optimized. A bad query or lack
of index can drag down a good server fast.
Make sure you have enough ram to keep the relevant data in cache.
Have a properly configured disk subsystem.
Set the MAXDOP at the server level to less than the total number of procs to
allow concurrent queries in peak times.
Andrew J. Kelly SQL MVP
"RK73" <RK73@.discussions.microsoft.com> wrote in message
news:1B595AAC-B34F-4414-8AF4-C6F664171AF0@.microsoft.com...
> no I do not have preocessor queue issues as of now. Ours is a web
> application
> with lot of selects but few insert/updates (hotel reservations). The
> reason I
> am doing this is for future growth. The product has to be able to support
> five times the current load. I am guessing increasing the load by 5 time
> will
> definately cause CPU queue issues. But Increasing the CPUs- is it a good
> route to take to handle more load ? Any good reading material on this
> subject.
> --
> RK
>
> "Andrew J. Kelly" wrote:
>|||What server do you have? We were running 500 users on a Dell 8450 with 4
900Mhz cpus.
It failed a month ago and we had to quickly swap in a modern Compaq with 2 x
3.2ghz hyperthreading cpu's.
The dell had 24 disks running raid 10 arrays in two split bus PV200s off two
perc3/dc controllers.
The compaq has only 6 disks but just wipes the floor with the old dell.
While I've had the dell down I tried replacing the Dell Perc3's with the
latest compaq controllers. Despite the controllers being U320 they have to
run U160 as the 15K rpm Fujitsu drives in the dell are a few years old. The
sustained disk throughput increased by a factor of 2 both for reads and
writes. I was never happy with the perc3's but now the compaq controllers
proved the point. Note also that dell array manager does not set up raid 10
correctly as a stripe of mirrors. It sets it up as a span of mirrors. You
have to set up the raid 10 arrays through the dell controller bios to
achieve true raid 10.
Stats, all on a dell 8450, Windows 2003, Raid 10 on 8 disks over two
channels (dual channel controllers). Write back cache enabled. all stats
unbuffered by windows.
Test tool Sisoft Sandra.
Perc3/DC. Set up by array manager
Sequential Read 33 Mb/sec
Random Read 11 Mb/sec
Sequential Write 22 Mb/sec
Random Write 19 Mb/sec
Perc3/DC. Set up bios
Sequential Read 120 Mb/sec
Random Read 115 Mb/sec
Sequential Write 27 Mb/sec
Random Write 25 Mb/sec
Compaq Smart Array 6402
Sequential Read 228 Mb/sec
Random Read 161 Mb/sec
Sequential Write 47 Mb/sec
Random Write 36 Mb/sec
Shocking.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:O%23fiU6crFHA.2540@.TK2MSFTNGP09.phx.gbl...
> Increasing the CPU's is always a good way to deal with increased load but
> you need to do a few things first.
> Make sure all the code and tables are fully optimized. A bad query or
> lack of index can drag down a good server fast.
> Make sure you have enough ram to keep the relevant data in cache.
> Have a properly configured disk subsystem.
> Set the MAXDOP at the server level to less than the total number of procs
> to allow concurrent queries in peak times.
> --
> Andrew J. Kelly SQL MVP
>
> "RK73" <RK73@.discussions.microsoft.com> wrote in message
> news:1B595AAC-B34F-4414-8AF4-C6F664171AF0@.microsoft.com...
>|||Thank you both for your comments.
I have SQL Server 2k / Windows 2003 EE on a DELL PE 6850, 8 GB RAM, 4x3.33GHz
hyperthreaded CPUs, and two DELL PERC 4/DC controllers. Currently I am zooming
along. It is only the future I am worried about.
Thanks for the true RAID 10 tip.
RK
"Paul Cahill" wrote:
> What server do you have? We were running 500 users on a Dell 8450 with 4
> 900Mhz cpus.
> It failed a month ago and we had to quickly swap in a modern Compaq with 2 x
> 3.2ghz hyperthreading cpu's.
> The dell had 24 disks running raid 10 arrays in two split bus PV200s off two
> perc3/dc controllers.
> The compaq has only 6 disks but just wipes the floor with the old dell.
> While I've had the dell down I tried replacing the Dell Perc3's with the
> latest compaq controllers. Dispite the controllers being U320 they have to
> run U160 as the 15K rpm Fujitsu drives in the dell are a few years old. The
> sustained disk throughput increased by a factor of 2 both for reads and
> writes. I was never happy with the perc3's but now the compaq controllers
> proved the point. Note also that dell array manager does not set up raid 10
> correctly as a stripe of mirrors. It sets it up as a span of mirrors. You
> have to set up the raid 10 arrays through the dell controller bios to
> achieve true raid 10.
> Stats, all on a dell 8450, Windows 2003, Raid 10 on 8 disks over two
> channels (dual channel controllers). Write back cache enabled. all stats
> unbuffered by windows.
> Test tool Sisoft Sandra.
> Perc3/DC. Set up by array manager
> Sequential Read 33 Mb/sec
> Random Read 11 Mb/sec
> Sequential Write 22 Mb/sec
> Random Write 19 Mb/sec
> Perc3/DC. Set up bios
> Sequential Read 120 Mb/sec
> Random Read 115 Mb/sec
> Sequential Write 27 Mb/sec
> Random Write 25 Mb/sec
> Compaq Smart Array 6402
> Sequential Read 228 Mb/sec
> Random Read 161 Mb/sec
> Sequential Write 47 Mb/sec
> Random Write 36 Mb/sec
> Shocking.
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:O%23fiU6crFHA.2540@.TK2MSFTNGP09.phx.gbl...
>
>
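The server-level MAXDOP setting Andrew mentions is changed through sp_configure; a minimal sketch for a 4-processor box, leaving one processor free for concurrent work:
-- 'max degree of parallelism' is an advanced option
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- cap parallel query plans at 3 of the 4 processors
EXEC sp_configure 'max degree of parallelism', 3
RECONFIGURE
GO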