Friday, February 17, 2012
Effects of using Transaction in SP on Performance?
I have a web application that is used at large scale, with up to 10,000
entries per day. Currently there is no Begin Tran/Commit Tran/Rollback Tran,
and because of that we found some data corruption. I have now changed all
the transactional stored procedures to use transactions. I am wondering
about the performance of the application now that I have put transactions
into stored procedures that every user calls very frequently. Transactions
lock objects, which might hurt application performance. Please comment:
should I use transactions now or not?
Thanks.
Essa, M. Mughal
Software Developer
Canada
You should DEFINITELY use transactions. Performance penalty or not. There
is no choice here. Data integrity is, by far, the MOST IMPORTANT THING IN A
DATABASE. Sorry for the all caps, but I had to drive it home. Do not
sacrifice your data quality. Otherwise there is very little reason to even
be using a database.
Adam Machanic
SQL Server MVP
http://www.datamanipulation.net
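To make the recommendation concrete, here is a minimal sketch of the pattern being discussed: wrapping a stored procedure's related statements in one transaction with error handling. It assumes SQL Server 2005 or later (for TRY/CATCH), and the procedure, table, and column names are purely illustrative.

```sql
-- Hypothetical procedure; all object names are illustrative.
CREATE PROCEDURE dbo.ShipOrder
    @OrderID int
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;  -- roll back automatically on most runtime errors

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Both statements succeed together or are undone together.
        UPDATE dbo.Orders
        SET    Status = 'Shipped'
        WHERE  OrderID = @OrderID;

        INSERT INTO dbo.OrderHistory (OrderID, ChangedAt)
        VALUES (@OrderID, GETDATE());

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        RAISERROR('ShipOrder failed; changes rolled back.', 16, 1);
    END CATCH
END
```

Keeping the transaction as short as possible (no slow lookups or waits between BEGIN and COMMIT) limits how long locks are held, which addresses the performance concern raised above.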
|||Hi Adam;
Thanks for your strong recommendation. I really appreciate the way you
conveyed the importance of the data. I have already changed all the
transactional stored procedures; I was just wondering, so now I'll go live
with them and see what happens. I hope nothing bad happens, and it should
improve data integrity.
Thanks
"Adam Machanic" wrote:
> You should DEFINITELY use transactions. Performance penalty or not. There
> is no choice here. Data integrity is, by far, the MOST IMPORTANT THING IN A
> DATABASE. Sorry for the all caps, but I had to drive it home. Do not
> sacrifice your data quality. Otherwise there is very little reason to even
> be using a database.
Effects of Intel Hyperthreading on SQL Server
Can anyone share their knowledge of the subject?
Thank you,
Bryan
No deleterious effects. Just makes one processor appear as two
(per-processor licensing is based on physical processors, not virtual).
"BDB" <reply@.to.group.com> wrote in message
news:O%23iTVCvWFHA.3240@.TK2MSFTNGP10.phx.gbl...
> Can anyone share their knowledge of the subject?
> Thank you,
> Bryan
>
|||Works pretty well most of the time. There are a few hotfixes that address
some obscure issues on highly scaled machines (8+ physical processors), but
those apply to only a very few systems. OLTP environments work very well on
HT boxes. I do limit the maximum degree of parallelism to the actual
physical processor count to avoid over-saturating the box on a single query.
Geoff N. Hiten
Microsoft SQL Server MVP
"BDB" <reply@.to.group.com> wrote in message
news:O%23iTVCvWFHA.3240@.TK2MSFTNGP10.phx.gbl...
> Can anyone share their knowledge of the subject?
> Thank you,
> Bryan
>
|||Consider limiting the number of processors used per query to at most the
same number as your physical processors. Also, if you get bad query plans,
investigate the MAXDOP query hint.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
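A sketch of both suggestions; the value 4 is an assumption standing in for your actual physical processor count, and the query is illustrative:

```sql
-- Server-wide cap on parallelism (here assuming 4 physical processors).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Per-query override for a statement that gets a bad parallel plan.
SELECT CustomerID, COUNT(*) AS OrderCount   -- illustrative query
FROM   dbo.Orders
GROUP BY CustomerID
OPTION (MAXDOP 1);                          -- serial plan for this query only
```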
"BDB" <reply@.to.group.com> wrote in message news:O%23iTVCvWFHA.3240@.TK2MSFTNGP10.phx.gbl...[
vbcol=seagreen]
> Can anyone share their knowledge of the subject?
> Thank you,
> Bryan
>[/vbcol]|||BDB wrote:
> Can anyone share their knowledge of the subject?
> Thank you,
> Bryan
SQL Server 2000 SP3 was made "aware" of hyperthreading and will
distinguish between a logical and physical processor when deciding on
parallel plans.
David Gugick
Imceda Software
www.imceda.com|||In article <O#iTVCvWFHA.3240@.TK2MSFTNGP10.phx.gbl>, reply@.to.group.com
says...
> Can anyone share their knowledge of the subject?
We have single, dual, and quad Xeon CPU servers with MS SQL 2000 on them.
When HT is enabled we see a 0% to 28% increase in performance for SOME
tasks, but not all.
The real performance increase is in reindexing the tables on a weekly or
monthly basis, at least for our applications, where more than 2 million
records are imported every night, Monday through Friday.
What I see with HT is that you "can" performance-tune your SQL Server
for it, but on a single-CPU Xeon server with HT enabled there seems to
be little difference; the most difference was on the duals and the least
was on the quads.
--
spam999free@.rrohio.com
remove 999 in order to email me|||"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:u$G9YfvWFHA.3572@.TK2MSFTNGP12.phx.gbl...
> BDB wrote:
> SQL Server 2000 SP3 was made "aware" of hyperthreading and will
> distinguish between a logical and physical processor when deciding on
> parallel plans.
> --
> David Gugick
> Imceda Software
> www.imceda.com
David,
Thanks for your reply.
I've looked for an article on this topic for SP3. Can you specify a link or
KB #?
Thank you,
Bryan|||BDB wrote:
> "David Gugick" <davidg-nospam@.imceda.com> wrote in message
> news:u$G9YfvWFHA.3572@.TK2MSFTNGP12.phx.gbl...
> David,
> Thanks for your reply.
> I've looked for an article on this topic for SP3. Can you specify a
> link or KB #?
> Thank you,
> Bryan
I'm afraid the information came from the SQL Server group. I don't know
if it was ever documented, although I suspect it's somewhere on the MS
site. But here's a start:
http://www.microsoft.com/sql/howtobuy/SQLonHTT.doc
http://support.microsoft.com/kb/835864
David Gugick
Imceda Software
www.imceda.com|||"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:OXp3rIzWFHA.3620@.TK2MSFTNGP09.phx.gbl...
> BDB wrote:
> I'm afraid the information came from the SQL Server group. I don't know if
> it was ever documented, although I suspect it's somewhere on the MS site.
> But here's a start:
> http://www.microsoft.com/sql/howtobuy/SQLonHTT.doc
> http://support.microsoft.com/kb/835864
>
> --
> David Gugick
> Imceda Software
> www.imceda.com
Thanks again.|||I did some testing on HT processors using dual and quad CPU
boxes (i.e. physical chips) with 500 and 1000 OLTP user loads. The HT
processors added around 30% and in some cases even more performance.
However, this was a strictly OLTP application with very targeted SELECT
statements and oodles of single-record INSERT, UPDATE, and DELETE
statements. All in all, the HT processors provided a nice boost.
-Kev
~~~
-Kevin Kline
Quest Software (www.quest.com)
SQL Server MVP
I support PASS, the Professional Association for SQL Server. (www.sqlpass.org)
> Can anyone share their knowledge of the subject?
> Thank you,
> Bryan
> Bryan|||While SP3 added some slight intelligence towards HT you really need Win2003
to take full advantage of it.
--
Andrew J. Kelly SQL MVP
"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:u$G9YfvWFHA.3572@.TK2MSFTNGP12.phx.gbl...
> BDB wrote:
>> Can anyone share their knowledge of the subject?
>> Thank you,
>> Bryan
> SQL Server 2000 SP3 was made "aware" of hyperthreading and will
> distinguish between a logical and physical processor when deciding on
> parallel plans.
> --
> David Gugick
> Imceda Software
> www.imceda.com
effects of inserting while selecting
We have a situation where a burst of inserts occurs while a select is being
done. This causes both clients to time out. As a whole, does selecting from a
table during the time that inserts are being done slow down the inserts, or
do we have some indexing problems?
thanks
mlblastvm wrote:
> We have a situation where a burst of inserts occurs while a select is being
> done. This causes both clients to time out. As a whole does selecting from a
> table during the time that inserts are being done slow down the inserts? or
> do we have some indexing problems?
> thanks
My money is on an indexing problem... Review the execution plan for
some of your SELECT statements - any scans, particularly table scans,
are likely sources of contention and opportunities for improved indexing.
Tracy McKibben
MCDBA
http://www.realsqlguy.com
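One way to do the review Tracy suggests is to ask the server for the plan as text without executing the statement; the query below is illustrative:

```sql
-- SET SHOWPLAN_TEXT must be the only statement in its batch, hence the GOs.
SET SHOWPLAN_TEXT ON;
GO
SELECT OrderID, OrderDate        -- illustrative query
FROM   dbo.Orders
WHERE  CustomerID = 42;
GO
SET SHOWPLAN_TEXT OFF;
GO
-- In the output, "Table Scan" (or an unexpected "Clustered Index Scan")
-- marks the likely source of contention; a seek on a suitable index is
-- what you want for a selective filter.
```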
|||The execution plans look fine. One application is performing at least
50,000 inserts in succession; the other is reading around 22,000 rows.
To be more specific, here is what happens: Application A reads about 20k
records from Table A and then inserts roughly that many into the database
(a different table, though). Application B, at the same time, inserts
records into Table A. Do the inserts from one application slow down the
inserts from the other because the disk heads have to move around?
"Tracy McKibben" wrote:
> mlblastvm wrote:
> My money is on an indexing problem... Review the execution plan for
> some of your SELECT statements - any scans, particularly table scans,
> are likely sources of contention and opportunities for improved indexing.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
>
|||mlblastvm wrote:
> the execution plans look fine. One application is performing at least 50,000
> inserts in succession. The other application is reading around 22000 rows.
> Actually to be more specific here is what happens. Application A reads out
> 20k records from Table A and then inserts around that many to the database
> (different table though). Application B at the same time inserts records into
> Table A. Do the inserts from one application slow down the inserts from the
> other application because the heads have to move around?
>
Well, inserting data into a database certainly does produce disk I/O,
particularly against the transaction log. The rule of thumb is that the
transaction log and database files are placed on separate volumes, with
the transaction log on an I/O channel that provides the fastest
possible write throughput.
Tracy McKibben
MCDBA
http://www.realsqlguy.com
|||In our case the logs are on separate volumes. I assume based on what you are
saying that doing a select while inserts are being done will have an impact
on both the select and the insert?
"Tracy McKibben" wrote:
> mlblastvm wrote:
> Well, inserting data into a database certainly does produce disk I/O,
> particularly against the transaction log. The rule of thumb is that the
> transaction log and database files are place on seperate volumes, with
> the transaction log being on an I/O channel that provides the fastest
> possible write throughput.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
>
|||mlblastvm wrote:
> In our case the logs are on separate volumes. I assume based on what you are
> saying that doing a select while inserts are being done will have an impact
> on both the select and the insert?
>
I was simply responding to your question about the "heads moving around".
An INSERT can block a SELECT if that insert has locked data pages (or
even the entire table) that the SELECT is attempting to read. The
SELECT will be forced to wait until the lock has been released.
Conversely, if an INSERT wants to lock a page (or a table) that a SELECT
is currently reading, the INSERT will potentially have to wait. It all
depends on the lock types being used. Having good indexes in place to
support the SELECT statements will help to alleviate some of this.
Tracy McKibben
MCDBA
http://www.realsqlguy.com
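As a sketch of that last point, if the SELECT that times out filters on one column and returns a couple of others (the names here are hypothetical), a composite index that covers it lets the reader satisfy the query without touching the pages the concurrent INSERTs are writing:

```sql
-- Hypothetical covering index: the filter column first, then the columns
-- the SELECT returns, so the query never has to visit the base table.
CREATE INDEX IX_Orders_CustomerID_Covering
ON dbo.Orders (CustomerID, OrderDate, Total);
```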
|||I guess the real question I have is this: in a case where locking is not
occurring, is it reasonable to assume that selecting from a database while
doing inserts to it will cause both the select and the insert to take longer
than if they were done independently? If that is the case, what exactly
accounts for this?
thanks
"Tracy McKibben" wrote:
> mlblastvm wrote:
> I was simply responding to your question about the "heads moving around".
> An INSERT can block a SELECT if that insert has locked data pages (or
> even the entire table) that the SELECT is attempting to read. The
> SELECT will be forced to wait until the lock has been released.
> Conversely, if an INSERT wants to lock a page (or a table) that a SELECT
> is currently reading, the INSERT will potentially have to wait. It all
> depends on the lock types being used. Having good indexes in place to
> support the SELECT statements will help to alleviate some of this.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
>
|||mlblastvm wrote:
> I guess the real question I have is this. In a case where locking is not
> occuring is it reasonable to assume that selecting from a database while
> doing inserts to it will cause both the select and insert to take longer than
> if they were to be done independantly. If that is the case what exactly
> accounts for this?
> thanks
You will ALWAYS have locking - that locking may or may not cause
contention, which will appear to slow things down.
If things slow down during these large inserts, the only reasons that I
can think of are:
1. Locks being created by the INSERT are blocking other processes
2. The disks are not capable of keeping up with the I/O demands
produced by the influx of new data from the INSERT
3. The new data from the INSERT is requiring the database and/or
transaction log file to grow, resulting in a delay while that growth
takes place.
Tracy McKibben
MCDBA
http://www.realsqlguy.com
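To tell which of these reasons applies, reason 1 can be checked directly while the slowdown is happening; on SQL Server 2000 the sysprocesses table shows who is blocked and by whom:

```sql
-- Sessions currently blocked, and the spid holding the lock they wait on.
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM   master.dbo.sysprocesses
WHERE  blocked <> 0;

-- For a blocking spid reported above, sp_lock shows exactly which
-- resources it holds, e.g.: EXEC sp_lock 57;  (57 is illustrative)
```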
|||I would add:
4) Not enough memory to hold all the pages required for the insert and all
the pages required for the select at once, causing extra I/O and
processor activity swapping pages in and out.
5) Enough data being transferred in and out to cause network contention on
the TDS connections.
6) Becoming processor-bound, especially if the select requires sorting of
results.
7) A checkpoint interval low enough that the inserts cause frequent
checkpoints, saturating the .mdf disk drive.
In general, if you're running that close to the margins, you probably need
to bump up your timeouts.
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:45537D29.2060702@.realsqlguy.com...
> mlblastvm wrote:
> You will ALWAYS have locking - that locking may or may not cause
> contention, which will appear to slow things down.
> If things slow down during these large inserts, the only reasons that I
> can think of are:
> 1. Locks being created by the INSERT are blocking other processes
> 2. The disks are not capable of keeping up with the I/O demands produced
> by the influx of new data from the INSERT
> 3. The new data from the INSERT is requiring the database and/or
> transaction log file to grow, resulting in a delay while that growth takes
> place.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
effects of inserting while selecting
done. This causes both clients to time out. As a whole does selecting from a
table during the time that inserts are being down slow down the inserts? or
do we have some indexing problems?
thanksmlblastvm wrote:
> We have a situation where a burst of inserts occurs while a select is being
> done. This causes both clients to time out. As a whole does selecting from a
> table during the time that inserts are being down slow down the inserts? or
> do we have some indexing problems?
> thanks
My money is on an indexing problem... Review the execution plan for
some of your SELECT statements - any scans, particularly table scans,
are likely sources of contention and opportunities for improved indexing.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||the execution plans look fine. One application is performing at least 50,000
inserts in succession. The other application is reading around 22000 rows.
Actually to be more specific here is what happens. Application A reads out
20k records from Table A and then inserts around that many to the database
(different table though). Application B at the same time inserts records into
Table A. Do the inserts from one application slow down the inserts from the
other application because the heads have to move around?
"Tracy McKibben" wrote:
> mlblastvm wrote:
> > We have a situation where a burst of inserts occurs while a select is being
> > done. This causes both clients to time out. As a whole does selecting from a
> > table during the time that inserts are being down slow down the inserts? or
> > do we have some indexing problems?
> > thanks
> My money is on an indexing problem... Review the execution plan for
> some of your SELECT statements - any scans, particularly table scans,
> are likely sources of contention and opportunities for improved indexing.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
>|||mlblastvm wrote:
> the execution plans look fine. One application is performing at least 50,000
> inserts in succession. The other application is reading around 22000 rows.
> Actually to be more specific here is what happens. Application A reads out
> 20k records from Table A and then inserts around that many to the database
> (different table though). Application B at the same time inserts records into
> Table A. Do the inserts from one application slow down the inserts from the
> other application because the heads have to move around?
>
Well, inserting data into a database certainly does produce disk I/O,
particularly against the transaction log. The rule of thumb is that the
transaction log and database files are place on seperate volumes, with
the transaction log being on an I/O channel that provides the fastest
possible write throughput.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||In our case the logs are on separate volumes. I assume based on what you are
saying that doing a select while inserts are being done will have an impact
on both the select and the insert?
"Tracy McKibben" wrote:
> mlblastvm wrote:
> > the execution plans look fine. One application is performing at least 50,000
> > inserts in succession. The other application is reading around 22000 rows.
> > Actually to be more specific here is what happens. Application A reads out
> > 20k records from Table A and then inserts around that many to the database
> > (different table though). Application B at the same time inserts records into
> > Table A. Do the inserts from one application slow down the inserts from the
> > other application because the heads have to move around?
> >
> Well, inserting data into a database certainly does produce disk I/O,
> particularly against the transaction log. The rule of thumb is that the
> transaction log and database files are place on seperate volumes, with
> the transaction log being on an I/O channel that provides the fastest
> possible write throughput.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
>|||mlblastvm wrote:
> In our case the logs are on separate volumes. I assume based on what you are
> saying that doing a select while inserts are being done will have an impact
> on both the select and the insert?
>
I was simply responding to your question about the "heads moving around".
An INSERT can block a SELECT if that insert has locked data pages (or
even the entire table) that the SELECT is attempting to read. The
SELECT will be forced to wait until the lock has been released.
Conversely, if an INSERT wants to lock a page (or a table) that a SELECT
is currently reading, the INSERT will potentially have to wait. It all
depends on the lock types being used. Having good indexes in place to
support the SELECT statements will help to alleviate some of this.
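As a rough sketch of the kind of supporting index described here (the table and column names are invented, not from the thread):

```sql
-- Hypothetical: one application inserts into dbo.TableA while another runs
-- SELECTs filtered on ReadDate. A covering nonclustered index lets the
-- SELECT touch only the index pages it needs instead of scanning the table,
-- shrinking the set of pages on which the INSERTs and the SELECT can collide.
CREATE NONCLUSTERED INDEX IX_TableA_ReadDate
    ON dbo.TableA (ReadDate)
    INCLUDE (CustomerId, Amount);
```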
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||I guess the real question I have is this. In a case where locking is not
occurring, is it reasonable to assume that selecting from a database while
doing inserts to it will cause both the select and insert to take longer than
if they were to be done independently? If that is the case, what exactly
accounts for this?
thanks
"Tracy McKibben" wrote:
> mlblastvm wrote:
> > In our case the logs are on separate volumes. I assume based on what you are
> > saying that doing a select while inserts are being done will have an impact
> > on both the select and the insert?
> >
> I was simply responding to your question about the "heads moving around".
> An INSERT can block a SELECT if that insert has locked data pages (or
> even the entire table) that the SELECT is attempting to read. The
> SELECT will be forced to wait until the lock has been released.
> Conversely, if an INSERT wants to lock a page (or a table) that a SELECT
> is currently reading, the INSERT will potentially have to wait. It all
> depends on the lock types being used. Having good indexes in place to
> support the SELECT statements will help to alleviate some of this.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
>|||mlblastvm wrote:
> I guess the real question I have is this. In a case where locking is not
> occurring, is it reasonable to assume that selecting from a database while
> doing inserts to it will cause both the select and insert to take longer than
> if they were to be done independently? If that is the case, what exactly
> accounts for this?
> thanks
You will ALWAYS have locking - that locking may or may not cause
contention, which will appear to slow things down.
If things slow down during these large inserts, the only reasons that I
can think of are:
1. Locks being created by the INSERT are blocking other processes
2. The disks are not capable of keeping up with the I/O demands
produced by the influx of new data from the INSERT
3. The new data from the INSERT is requiring the database and/or
transaction log file to grow, resulting in a delay while that growth
takes place.
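If the third cause is suspected, one common mitigation is to pre-grow the files ahead of the load rather than relying on autogrow mid-insert (database and file names, and the size, are placeholders):

```sql
-- Hypothetical: grow the log once, up front. Repeated autogrow events during
-- a large insert stall whichever statement happens to trigger them.
ALTER DATABASE Sales
    MODIFY FILE (NAME = Sales_log, SIZE = 4GB);
```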
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||I would add:
4) Not enough memory to hold all the pages required for the insert and all
the pages required for the select in memory at once causing extra IO and
processor activity swapping pages in and out.
5) Enough data being transferred in and out to cause network contention on
the TDS connections
6) Becoming processor bound - especially if the select requires sorting of
results
7) Checkpoint interval low enough so inserts are causing frequent
checkpoints saturating the mdf disk drive
In general, if you're running that close to the margins, you probably need
to bump up your timeouts.
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:45537D29.2060702@.realsqlguy.com...
> mlblastvm wrote:
>> I guess the real question I have is this. In a case where locking is not
>> occurring, is it reasonable to assume that selecting from a database while
>> doing inserts to it will cause both the select and insert to take longer
>> than if they were to be done independently? If that is the case, what
>> exactly accounts for this?
>> thanks
> You will ALWAYS have locking - that locking may or may not cause
> contention, which will appear to slow things down.
> If things slow down during these large inserts, the only reasons that I
> can think of are:
> 1. Locks being created by the INSERT are blocking other processes
> 2. The disks are not capable of keeping up with the I/O demands produced
> by the influx of new data from the INSERT
> 3. The new data from the INSERT is requiring the database and/or
> transaction log file to grow, resulting in a delay while that growth takes
> place.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
effects of inserting while selecting
We have a situation where a burst of inserts occurs while a select is being
done. This causes both clients to time out. As a whole, does selecting from a
table while inserts are being done slow down the inserts? Or do we have some
indexing problems?
thanks
mlblastvm wrote:
> We have a situation where a burst of inserts occurs while a select is being
> done. This causes both clients to time out. As a whole, does selecting from a
> table while inserts are being done slow down the inserts? Or do we have some
> indexing problems?
> thanks
My money is on an indexing problem... Review the execution plan for
some of your SELECT statements - any scans, particularly table scans,
are likely sources of contention and opportunities for improved indexing.
Tracy McKibben
MCDBA
http://www.realsqlguy.com
Effects of changing a table name
If I rename a table on my SQL Server 2005 database, say from Carriers to
Partners, will it update the stored procedures that reference the old table
name "Carriers" to the new "Partners" name? I am talking about using the
rename function in the Microsoft SQL Server Management Studio program. Or
will I have to edit my 1,351 stored procedures manually?
thanks!> If I rename a table on my SQL Server 2005 database say from Carriers to
> Partners, will it update the stored procedures that reference the old
> table name "carriers" to the new "partners" name?
No, SQL Server is not going to go and edit your code for you. It would be
hard enough to find all the places it is referenced in vanilla T-SQL (and
this would rely on a correct sysdepends, rather than deferred name
resolution), what about all the places where the table name could be
referenced dynamically (e.g. EXEC('SELECT * INTO t1 FROM Carriers'), or
EXEC('SELECT * INTO t1 FROM Car'+'riers'), or EXEC('SELECT * FROM
'+@TableName))? Even if you took your procedure code offline and did a
grep/replace, you're still not guaranteed to find every single location.
> I am talking about using the rename function in the rename function in the
> micosoft SQL Server managment studio program. Or will I have to edit my
> 1,351 stored procedures manually?
An alternative would be to create a synonym.
A
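A sketch of the synonym approach, using the names from the thread:

```sql
-- Rename the table, then point the old name at it, so the 1,351 procedures
-- that still say "Carriers" keep working unchanged.
-- (sp_rename will warn that objects referencing the old name must be updated;
-- the synonym is what covers them in the meantime.)
EXEC sp_rename 'dbo.Carriers', 'Partners';
CREATE SYNONYM dbo.Carriers FOR dbo.Partners;
```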
Effects of a Database restoration.
Does a database restoration perform any
update-statistics/defragmentation by default? We observed a marked
improvement in performance when we restored a database from a high-end
machine to a low-end machine.
Could someone shed some light on this.
Regards,
Thyagarajan Delli.Thyagu (tdelli@.gmail.com) writes:
> Does a database restoration perform any
> update-statistics/defragmentation by default.
As far as I know, no. Except, that if the MDF is very fragmented on the
source machine, and the target machine has space to accept it as
contiguous, you will see defragmentation on that level.
> We observed a marked improvement in performance when we restored the a
> database from an high end machine to a low machine.
Maybe the low-end machine has a single CPU? SQL Server sometimes goes
for parallel plans on multi-CPU machines that are not very efficient
at all.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
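Two quick follow-ups implied by the answer above, as a sketch (run in the restored database; the query and its table are invented):

```sql
-- A restore carries statistics over as-is; refresh them manually if they
-- may be stale.
EXEC sp_updatestats;

-- To test the parallel-plan theory, force a serial plan for a suspect query
-- on the multi-CPU machine and compare timings.
SELECT CustomerId, SUM(Amount)
FROM dbo.Orders
GROUP BY CustomerId
OPTION (MAXDOP 1);
```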
effect of more column in where clause
what are the effects of having a number of columns in the WHERE clause, i.e., if we use more columns in the where clause, what will be the impact on performance?
this is important for me when designing queries
having a lot of unnecessary where conditions will surely slow down the query.
the best thing that you can do is to streamline the logic into its simplest form
|||Perhaps I misunderstand the question; my normal experience is that if the additional columns contribute to filtering out unwanted records, then the additional columns in the where clause usually enhance performance.|||yeah that's right but a poorly written where clause will definitely slow down performance.
The trick is to use the best suited functions and keywords in the where clause
here's a useful link:
http://www.sql-server-performance.com/transact_sql.asp
regards,
joey
|||Hi joeydj,
what do you mean by "the trick is to use the best suited where clause by making use of the powerful sql server functions"?
AMB
|||oh sorry. got a grammar problem
here i have it corrected
"The trick is to use the best suited functions and keywords in the where clause"
hmmm... thats better..
example:
select ... from
|||where x=1 and x=2 and x=7 and x=9
maybe written as
where x in (1,2,7,9)
Joey is normally better than that; please cut him some slack. I think what he means is
select ... from
where x=1 OR x=2 OR x=7 OR x=9
maybe written as
where x in (1,2,7,9)
|||
hahaha. not been here for sometime.
hmmm thanks kent.
|||Not to mention that there is no performance benefit in choosing one of these as opposed to the other, anyway. (There is a readability benefit, and the two forms will behave differently if you carelessly tack on AND <another condition> to the query.)
Steve Kass
Drew University
http://www.stevekass.com
|||Can you give an example of the choices you have to make? Usually if you add "more columns in where clause", you change the meaning of the query (but not always), and the first goal of designing a query is for it to ask the right question...
Steve Kass
Drew University
http://www.stevekass.com
|||One thing that people sometimes forget is that a full table scan can sometimes be better than an indexed scan. If you are ultimately going to read every block of data from disk via an index scan then you might want to forgo the index. Modern databases with good statistics can usually establish an execution plan that is "good enough" but it can still pay to understand your data.
Here is a quick example.
Let's say I have a database of people and for some reason their gender is overwhelmingly biased in one direction. An optimizer might look at the number of unique values for gender and assume a 50/50 split in the data. From the optimizer's best guess, it might make good sense to always use an index on gender to access the rows of the table. However, that might not be the case in practice.
Let's say the table is 100k records with :
90% F
10% M
To find the men an indexed lookup on gender is probably a good thing. On the other hand an indexed lookup to find women will result in more disk IO and slower performance than a full table scan (not counting your network).
Knowing this distribution ahead of time might lead someone looking for all women to do something like
Select * from employees where gender + '' = 'F'
|||thanks for all your views.
let me give you a specific example.
my database has a "branch name" field in all the tables, and we have a separate copy of the database for each branch. so a particular branch user will connect to his branch, which has records only for that branch.
in this situation, there is no need to filter the records again with "branch name", but if there is no performance issue, i wish to include it in the WHERE clause to be 100% sure that all the records in the query output have the same branch name.
|||In your situation, I think you could add the "branch name" column without a performance issue. But you need to create an index or statistics for this column. As a result, the query optimizer understands that "branch name" is the same for all records and doesn't use it in the plan.
But in any case you should check the query plans and only decide after that
|||This is a second good example of what I am talking about.
If your database is physically segregated by branch already then you want to make sure you know what the optimizer is doing when you add a "failsafe"
Where BranchId = 10
or alternatively
Where BranchName = 'Downtown'
to your queries.
If adding that clause causes the optimizer to include an index on BranchId as part of the execution plan then you will hurt your performance as a result of extra disk i/o and memory use. This might or might not be a concern for your infrastructure. Though it sounds like if you are physically partitioning your database by branch then you might be concerned about database performance.
Although it might seem counterintuitive, it might be a good idea to experiment with:
Where BranchId + 0 = 10
or alternatively
Where BranchName + '' = 'Downtown'
This would ensure that no index on BranchId could be used in the execution plan.
|||Dear AMERMSAMER,
The bottom line is that having more columns in the WHERE clause does not necessarily slow down performance and may, in fact, actually improve performance. It all depends on the indexes on the table and the "selectivity" of the columns in the WHERE. If there is a unique key and all of the columns in the key are "covered" by the WHERE clause, no more may be needed. SQL will try to optimize the query by using these columns where available.
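A small sketch of the "covered" case described above (table, index, and column names are invented):

```sql
-- If the WHERE clause supplies both columns of this unique key, the optimizer
-- can seek straight to at most one row; predicates beyond the key cost almost
-- nothing and can only filter further.
CREATE UNIQUE NONCLUSTERED INDEX IX_Orders_Branch_OrderNo
    ON dbo.Orders (BranchId, OrderNo);

SELECT OrderNo, Amount
FROM dbo.Orders
WHERE BranchId = 10 AND OrderNo = 12345;
```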