
Thursday, March 29, 2012

Embedded Select vs. UDF

Why is using an embedded SELECT statement faster than using an embedded UDF that contains exactly the same SQL code?

Example (not syntax corrected)
Ex1:
SELECT myValue1, myValue2 FROM myTable WHERE aValue = (SELECT Value FROM someTable WHERE myIndex = 800)

is much faster than something like

Ex2:
SELECT myValue1, myValue2 FROM myTable WHERE aValue = (dbo.FN_myUDF(@vmyIndex))

Given that dbo.FN_myUDF has the same code as the embedded select in the first example.

TIA,

KB

Actually I'm getting quite the opposite result:

select * from authors where au_id = (select au_id from authors where au_lname = 'Dull')

vs.

create function dbo.fn_get_au_id (
@lname varchar(50) ) returns char(11)
as begin
declare @au_id char(11)
select @au_id = au_id from authors where au_lname = @lname
return @au_id
end
go

+

select * from authors where au_id = dbo.fn_get_au_id('Dull')

The first results in: Table 'authors'. Scan count 2, logical reads 3, physical reads 0, read-ahead reads 0.

The second yields: Table 'authors'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.

Perhaps because SQL Server is able to take your poorly written code:

SELECT myValue1, myValue2 FROM myTable WHERE aValue = (SELECT Value FROM someTable WHERE myIndex = 800)

...and transform it into the much more efficient:

SELECT myValue1, myValue2
FROM myTable
INNER JOIN someTable on myTable.aValue = someTable.Value
WHERE someTable.myIndex = 800

...prior to executing it. The optimizer will streamline your statement whenever it can, and thus make use of any indexes on the tables and only make one pass through the subtable. Strictly following your code logic would result in a pass through someTable for every record in myTable, which is also what occurs when you call the UDF. The compiler wants none of that nonsense and fixes your code before executing it.

Well, I just used the same sample code as you (Mr. Lindman) provided and converted it to my previously posted comparative case...and...hmmmmm...It is actually WORSE than the other two:

select * from authors a
inner join authors b
on a.au_id = b.au_id
where b.au_lname = 'Dull'

results in: Table 'authors'. Scan count 2, logical reads 5, physical reads 0, read-ahead reads 0.

In fact, even JOIN hints don't make any difference. Ironic, isn't it? ;)

My bad. I was thinking he had the function call in his select clause. In the WHERE clause the UDF is only executed once. I get faster results with the UDF as well, though with a larger dataset I get identical results and execution plans using either the subquery method or the join method.
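
For anyone who wants to reproduce the comparison, a minimal harness built from the queries in this thread (assuming the pubs sample database and the fn_get_au_id function defined above):

SET STATISTICS IO ON

-- Subquery version:
SELECT * FROM authors WHERE au_id = (SELECT au_id FROM authors WHERE au_lname = 'Dull')

-- Scalar UDF version:
SELECT * FROM authors WHERE au_id = dbo.fn_get_au_id('Dull')

-- Join version:
SELECT a.* FROM authors a INNER JOIN authors b ON a.au_id = b.au_id WHERE b.au_lname = 'Dull'

SET STATISTICS IO OFF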

Tuesday, March 27, 2012

Embed SQL statement in another SQL statement

Suppose I have an MS access database with query1 and query2. Query2 uses
query1 something like this:

Select table.* from (table left join query1 on field1=field2)

Now I want to make a change in query1 to change the behaviour of query2. But
this cannot be done in a live environment because I have to shut down the
application, make the change in the database and start the application
again. What I can do is make a query 3 in my application (instead of in the
database) that initially does the same as query2, like this:

Select table.* from (table left join -->some SQL code to embed query1<-- on
field1=field2)

Once I know how to do this, I can change the "embedded" sql code to my
needs.

Question is: how do I "embed" SQL code in another statement?

"John Kenickney" <spam.trap@bigfoot.com> wrote in message news:<b0909e7389d49cec0cf4ddf5330bce77@news.teranews.com>...

You should probably post this in an Access forum to get a better response.

Simon
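
For completeness, the usual way to "embed" query1 is to paste its SQL in as a derived table. A sketch in T-SQL form (the assumed definition of query1, SELECT field2 FROM otherTable, is made up; Access/Jet syntax for derived tables differs slightly by version):

SELECT [table].*
FROM [table]
LEFT JOIN (SELECT field2 FROM otherTable) AS query1
       ON [table].field1 = query1.field2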

Friday, March 9, 2012

EM SQL Scripts

The EM can produce SQL scripts of tables/indexes/constraints/default values etc. from a database. When doing so, the create statement for tables and the alter statements for constraints (including the default values) are separated. In effect, the create table statement will only create the table; keys, default values and indexes are added as changes to the table. My question is: when saving the results in separate files as a basis to continue working on, would you take the effort of changing the create-table statements so that they also fill in the default values, or keep it separate?

I've never bothered to mangle the generated scripts... I've never seen any need for it.

-PatP|||Pat: Thing is, there's a db here without any of the tables/views/sp's stored on disk making it, imho, an unhealthy basis to continue building on. So I've scripted the db layout, views and all that allowing me to create a dev. I'm just considering whether I should take the effort or not; when changes are made to the db/tables I'll process it into the create-scripts as well.
I wonder though, do you have any create scripts that create all the tables in a database, and do you keep them separate from the indexes/constraints? I don't want to start a flame-thing on "who's responsible", just wondering if you'd take the trouble?

I'm not really clear what you are asking, so this response may seem way off base.

If you are talking about the option that causes Enterprise Mangler to script the PK/FK/Default/Check Constraints as part of the table versus scripting the table "raw" then creating ALTER TABLE statements to add the constraints, I usually don't care how it gets done. If there was some compelling reason to set it one way or the other, then I would. Since the net effect after running the whole script will be the same, I've never bothered to monkey with it... Either way works fine for me, and produces the same results in the end.

-PatP
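
To illustrate the two forms being discussed (table, column and constraint names below are made up; the net effect of running either script is the same):

-- Constraints scripted inline with the table:
CREATE TABLE dbo.Sample (
    id int NOT NULL CONSTRAINT PK_Sample PRIMARY KEY,
    created datetime NOT NULL CONSTRAINT DF_Sample_created DEFAULT (GETDATE())
)

-- Table scripted "raw", constraints added afterwards:
CREATE TABLE dbo.Sample2 (
    id int NOT NULL,
    created datetime NOT NULL
)
ALTER TABLE dbo.Sample2 ADD CONSTRAINT PK_Sample2 PRIMARY KEY (id)
ALTER TABLE dbo.Sample2 ADD CONSTRAINT DF_Sample2_created DEFAULT (GETDATE()) FOR created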

Wednesday, March 7, 2012

EM query strange behavior

In the Enterprise Manager query tool, a query (the date entered in the
criteria field as 31.03.05) displays as an SQL statement:
SELECT *
FROM [TABLE]
WHERE (TRANS_DATE > CONVERT(DATETIME, '2005-03-31 00:00:00', 102))
and yields the expected results.
However, changing query type to
DELETE FROM [SAME TABLE AS ABOVE]
WHERE (TRANS_DATE > CONVERT(DATETIME, '2005-03-31 00:00:00', 102))
results in the opposite of what is to be expected - all records where
TRANS_DATE < [the entered time] are deleted.
The TRANS_DATE field is of type DATETIME, and I cannot see that there are
any properties either in the field or the table that separates it from all
my other DATETIME fields in any other tables, where this bizarre delete
action does not occur.
I assume this has to do with some configuration of the automatic
time conversion that happens in the tool itself, but how do I change this
conversion setting so as to produce the desired delete results?
And how is it that it only happens with data from one single table?
Thanks for answering.
--
Message posted via http://www.sqlmonster.com

Hi
What happens when you run this:
DELETE FROM [SAME TABLE AS ABOVE]
WHERE (TRANS_DATE > '2005-03-31 00:00:00')
Regards
Mike
"Knut Bohn via SQLMonster.com" wrote:
> In the Enterprise Manager query tool, a query (the date entered in the
> criteria field as 31.03.05) displays as an SQL statement:
> SELECT *
> FROM [TABLE]
> WHERE (TRANS_DATE > CONVERT(DATETIME, '2005-03-31 00:00:00', 102))
> and yields the expected results.
> However, changing query type to
> DELETE FROM [SAME TABLE AS ABOVE]
> WHERE (TRANS_DATE > CONVERT(DATETIME, '2005-03-31 00:00:00', 102))
> results in the opposite of what is to be expected - all records where
> TRANS_DATE < [the entered time] is deleted.
> The TRANS_DATE field is of type DATETIME, and I cannot see that there are
> any properties either in the field or the table that separates it from all
> my other DATETIME fields in any other tables, where this bizarre delete
> action does not occur.
> I assume this has to do with some configuration of the automatic
> timeconversion that happens in the tool itself, but how to change this
> conversion setting so as to produce more desired delete results?
> And how is it that it only happens with data from one single table'
> Thanks for answering.
> --
> Message posted via http://www.sqlmonster.com
Knut,
I don't understand you.
> In the Enterprise Manager query tool, a query (the date entered in the
> criteria field as 31.03.05) displays as an SQL statement:
You said that the table's column is defined as the datetime datatype and data
was entered as 31.03.05, which throws the error:
--insert into table1 values ('31.03.05')
"The conversion of a char data type to a datetime data type resulted in an
out-of-range datetime value.
The statement has been terminated."
"Knut Bohn via SQLMonster.com" <forum@.nospam.SQLMonster.com> wrote in
message news:b10f4c5d93df49c6b3c31bee2a957db0@.SQLMonster.com...
> In the Enterprise Manager query tool, a query (the date entered in the
> criteria field as 31.03.05) displays as an SQL statement:
> SELECT *
> FROM [TABLE]
> WHERE (TRANS_DATE > CONVERT(DATETIME, '2005-03-31 00:00:00', 102))
> and yields the expected results.
> However, changing query type to
> DELETE FROM [SAME TABLE AS ABOVE]
> WHERE (TRANS_DATE > CONVERT(DATETIME, '2005-03-31 00:00:00', 102))
> results in the opposite of what is to be expected - all records where
> TRANS_DATE < [the entered time] is deleted.
> The TRANS_DATE field is of type DATETIME, and I cannot see that there are
> any properties either in the field or the table that separates it from all
> my other DATETIME fields in any other tables, where this bizarre delete
> action does not occur.
> I assume this has to do with some configuration of the automatic
> timeconversion that happens in the tool itself, but how to change this
> conversion setting so as to produce more desired delete results?
> And how is it that it only happens with data from one single table'
> Thanks for answering.
> --
Mike;
Same thing - deletes records with TRANS_DATE < entered value.
What's really bizarre is that there's another datetime column in the
table, and running a delete with criteria filtered against those date
values yields the expected result.
Again, I really can't spot any differences in the respective data
properties of these two columns.
Uri;
I don't think I follow what you're getting at? The SELECT runs fine with
the date criteria entered as 31.03.05, but somehow things go off
differently with no other change than replacing SELECT with DELETE.
--
Message posted via http://www.sqlmonster.com

Hi
I did some tests
CREATE TABLE #Test
(
col DATETIME
)
INSERT INTO #Test VALUES ('20050331')--Proper format to insert datetime
INSERT INTO #Test VALUES ('20050401')
INSERT INTO #Test VALUES ('20050328')
DELETE FROM #Test WHERE col>'20050331'--Deletes 1 row
SELECT * FROM #Test --Everything is ok
DROP TABLE #Test
--Another repro
CREATE TABLE #Test
(
col VARCHAR(30)
)
INSERT INTO #Test VALUES ('31.03.05') --with datetime this insert would fail
INSERT INTO #Test VALUES ('28.03.05') --with datetime this insert would fail
INSERT INTO #Test VALUES ('01.04.05') --with datetime this insert would fail
DELETE FROM #Test WHERE col > CONVERT(DATETIME,'2005-03-31',102) --Error is thrown
SELECT CONVERT(DATETIME,REPLACE(col,'.',''),102) FROM #Test
--2031-03-05 00:00:00.000
--2028-03-05 00:00:00.000
--2001-04-05 00:00:00.000
--That's why it deletes wrong data
DELETE FROM #Test WHERE CONVERT(DATETIME,REPLACE(col,'.',''),112) > CONVERT(DATETIME,'2005-03-31',102)
SELECT * FROM #Test
DROP TABLE #Test
"Knut Bohn via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:c446a5e8529b4bc49709d20ca6d2518d@.SQLMonster.com...
> Mike;
> Same thing - deletes records with TRANS_DATE < entered value.
> What's really bizzarre is that there's another datetime column in the
> table, and running a delete with criteria filtered against those date
> values yields the expected result.
> Again, I really can't spot any differences in the respective data
> properties of these two columns.
> Uri;
> I don't think I follow what you're getting at? The SELECT runs fine with
> the date criteria entered as 31.03.05, but somehow things go off
> differently with no other change thatn replacing SELECT with DELETE.
> --
Uri,
Thanks for taking the time to test this. However, I don't think it
explains the criteria oddity that occurs when the query changes from
SELECT to DELETE?
best,
knut
--
Message posted via http://www.sqlmonster.com
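
For what it's worth, if the root cause turns out to be an ambiguous date literal somewhere in what EM generates (an assumption on my part, not a confirmed diagnosis), explicit styles or the ISO basic format avoid the problem entirely:

-- Style 104 (German, dd.mm.yyyy) states the intent of a 31.03.2005 value explicitly:
SELECT CONVERT(DATETIME, '31.03.2005', 104)

-- The ISO basic format yyyymmdd is interpreted the same way regardless of
-- the language or DATEFORMAT settings:
SELECT CONVERT(DATETIME, '20050331')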

Sunday, February 26, 2012

Else statement is not working in the stored procedure

I have a stored procedure that needs to populate the fields of records in tblBag_data from tblShipping_sched; if a match is not found there, it looks in tblShipment_history. But it is not checking the history table (the 2nd table). Please help!

CREATE Procedure spUpdate_bag_data
@t1 int OUT
AS

declare @work_ord_num char(9), @two char(7), @work_ord_line_num char(3), @cust_num char(5), @cust_name char(50), @apple_part_num char(12), @apple_catalog_num char(28);

Declare update_bag CURSOR
FOR
SELECT work_ord_num, work_ord_line_num
FROM tblBag_data
WHERE cust_num IS NULL;

OPEN update_bag
FETCH NEXT FROM update_bag INTO @work_ord_num, @work_ord_line_num

WHILE @@FETCH_STATUS = 0 --and @counter<30
BEGIN
--set @counter = @counter + 1
SET @two = LEFT(@work_ord_num,6) + '%'
set @cust_num = '';

SELECT @cust_num = cust_num, @cust_name = cust_name, @apple_part_num = apple_part_num, @apple_catalog_num = apple_catalog_num
FROM tblShipping_sched
WHERE work_ord_num like @two AND work_ord_line_num = @work_ord_line_num;

IF @@RowCount > 0
BEGIN

UPDATE tblBag_data
SET cust_num = @cust_num, cust_name = @cust_name, apple_part_num = @apple_part_num, apple_catalog_num = @apple_catalog_num
WHERE work_ord_num like @two AND work_ord_line_num = @work_ord_line_num;
END

ELSE
BEGIN

SELECT cust_num = @cust_num, cust_name = @cust_name, apple_part_num = @apple_part_num, apple_catalog_num = @apple_catalog_num FROM tblShipment_history
WHERE work_ord_num like @two AND work_ord_line_num = @work_ord_line_num;

IF @cust_num IS NOT NULL and len(@cust_num) = 5
UPDATE tblBag_data SET cust_num = @cust_num, cust_name = @cust_name, apple_part_num = @apple_part_num, apple_catalog_num = @apple_catalog_num
WHERE work_ord_num like @two AND work_ord_line_num = @work_ord_line_num;

END

FETCH NEXT FROM update_bag INTO @work_ord_num, @work_ord_line_num
END

close update_bag
deallocate update_bag

return(1)

Why are you using a cursor? There's no need.

Also if
work_ord_num AND wrk_ord_line_num

are not the primary or a unique constraint to
FROM tblShipping_sched

Then you can get back multiple rows...and your assignment to the variables will be the last one returned...

And since there is no input variable to the sproc, it means you'll be doing every row in the table every time you run it...
well where cust_num is null

Need to see the DDL for the three tables...(sample data wouldn't hurt either)

Hi Brett,

I have a tblBag_data that needs four fields populated from tblShipping_sched or tblShipment_history. The stored procedure takes the work_ord_num and work_ord_line_num in tblBag_data and matches them to tblShipping_sched if the cust_num is null. It loops thru tblShipping_sched for that record; if it finds the record, it populates the four fields (cust_name, cust_num..) in tblBag_data. But if it doesn't find it, it is supposed to go to the tblShipment_history table, loop thru it for the same record and populate tblBag_data once it finds it.

The IF statement seems to be working fine, but the ELSE is definitely not working. If there is a better way to write this without a cursor, please provide some sample code.

Thank you.
I hope it

Sorry...work got in the way...

How about:

UPDATE l
SET cust_num = r.cust_num
, cust_name = r.cust_name
, apple_part_num = r.apple_part_num
, apple_catalog_num = r.apple_catalog_num
FROM tblBagData l
INNER JOIN tblBagData r
ON r.work_ord_num like LEFT(l.work_ord_num,6) + '%'
AND r.work_ord_line_num = l.work_ord_line_num
WHERE cust_num IS NULL

And you don't even have to worry if it finds it or not, because you can then just do the second query, because the cust_num will still be null.

Hmm... This might be a solution.
I'll give it a try.
Thanks!
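
For the record, the likely culprit in the original procedure is the ELSE branch: SELECT cust_num = @cust_num, ... FROM tblShipment_history returns the variables as columns rather than assigning them from the table (it would need to be SELECT @cust_num = cust_num, ...). A set-based, two-pass sketch that avoids the cursor and the variables altogether (table and column names taken from the post, untested against the real schema):

UPDATE b
SET cust_num = s.cust_num,
    cust_name = s.cust_name,
    apple_part_num = s.apple_part_num,
    apple_catalog_num = s.apple_catalog_num
FROM tblBag_data b
INNER JOIN tblShipping_sched s
    ON s.work_ord_num LIKE LEFT(b.work_ord_num, 6) + '%'
   AND s.work_ord_line_num = b.work_ord_line_num
WHERE b.cust_num IS NULL

-- Second pass: anything still unmatched falls back to the history table.
UPDATE b
SET cust_num = h.cust_num,
    cust_name = h.cust_name,
    apple_part_num = h.apple_part_num,
    apple_catalog_num = h.apple_catalog_num
FROM tblBag_data b
INNER JOIN tblShipment_history h
    ON h.work_ord_num LIKE LEFT(b.work_ord_num, 6) + '%'
   AND h.work_ord_line_num = b.work_ord_line_num
WHERE b.cust_num IS NULL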

else if statement


I have this function and it is giving me the error below the code, and I want to know where my mistake is:

create function pt
(@idpt INT,
@option INT)
Returns Real
AS
Begin
if (@option=1)
Begin
Return ( select sqrt(a.px*a.px + a.py*a.py)
from electron as a
where @idpt = a.idap)
end
else if(@option=2)
Begin
Return ( select sqrt(a.px*a.px + a.py*a.py)
from muon as a
where @idpt = a.idap)
End
else
Begin
Return ( select sqrt(a.px*a.px + a.py*a.py)
from jet as a
where @idpt = a.idap)
end
end

Msg 455, Level 16, State 2, Procedure pt, Line 22
The last statement included within a function must be a return statement.
Try putting a dummy "RETURN NULL" after the large IF block. I think the compiler isn't realizing that at least one of the 3 return statements you've got will always be called, and it's throwing a fit. The extra RETURN will never get called, but it'll at least keep the compiler happy.
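
Applied to the function above, the tail would then look something like this (a sketch; the extra RETURN is never actually reached):

else
Begin
Return ( select sqrt(a.px*a.px + a.py*a.py)
from jet as a
where @idpt = a.idap)
end
return NULL -- dummy return to keep the parser happy
end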

Try the code below.

Chris


CREATE FUNCTION pt (@idpt INT, @option INT)
RETURNS REAL
AS
BEGIN
    DECLARE @Output REAL

    IF (@option = 1)
    BEGIN
        SELECT @Output = SQRT(a.px * a.px + a.py * a.py)
        FROM electron AS a
        WHERE @idpt = a.idap
    END
    ELSE IF (@option = 2)
    BEGIN
        SELECT @Output = SQRT(a.px * a.px + a.py * a.py)
        FROM muon AS a
        WHERE @idpt = a.idap
    END
    ELSE
    BEGIN
        SELECT @Output = SQRT(a.px * a.px + a.py * a.py)
        FROM jet AS a
        WHERE @idpt = a.idap
    END

    RETURN @Output
END
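
Hypothetical usage (the idap value 12345 is made up; the electron/muon/jet tables are assumed to exist):

SELECT dbo.pt(12345, 1) AS pt_electron
SELECT dbo.pt(12345, 2) AS pt_muon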

Eliminating spaces in query output

Is there a way that I can do a select statement that will just select the text within a field. ie. If the field is 200 wide, but the actual text is only 20 characters, can I select JUST the 20 characters with nothing else? I need to do this for a huge number of rows ....

Also, I'm running the select statement via osql if that helps.

When selecting the column you can use the trim functions, e.g.

select rtrim(ltrim(emp_name))
from employees

This will trim blank spaces from the left and right hand ends of the character string. If you only need to clear trailing blanks just use the rtrim function.

RTRIM was just what I needed, thank you.
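
To see why the padding is there in the first place, a small illustration (my assumption: the 200-wide field is a fixed-width char(200) column; the temp table and sample value are made up):

CREATE TABLE #demo (emp_name char(200))
INSERT INTO #demo VALUES ('John Smith')

SELECT DATALENGTH(emp_name) AS stored_length,        -- 200: char pads with trailing spaces
       DATALENGTH(RTRIM(emp_name)) AS trimmed_length -- 10
FROM #demo

DROP TABLE #demo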

Sunday, February 19, 2012

More efficient way than an IN statement

I have two tables, JDECurrencyRates and JDECurrencyConversion.

I want to insert all the records from JDECurrencyRates into JDECurrencyConversion that do not already exist in the JDECurrencyConversion table. For matching I have to use three keys, i.e. FromCurrency, ToCurrency and EffDate.

To achieve this task I wrote the following query:

INSERT INTO PresentationEurope.dbo.JDECurrencyConversion(Date, FromCurrency, FromCurrencyDesc,
    ToCurrency, ToCurrencyDesc, EffDate, FromExchRate, ToExchRate, CreationDatetime, ChangeDatetime)
(SELECT effdate as date, FromCurrency, FromCurrencyDesc, ToCurrency, ToCurrencyDesc, EffDate,
    FromExchRate, ToExchRate, GETDATE(), GETDATE()
 FROM MAINTENANCE.DBO.JDECURRENCYRATES
 WHERE FROMCURRENCY NOT IN (SELECT FromCurrency FROM PRESENTATIONEUROPE.DBO.JDECURRENCYCONVERSION)
    OR TOCURRENCY NOT IN (SELECT TOCURRENCY FROM PRESENTATIONEUROPE.DBO.JDECURRENCYCONVERSION)
    OR EFFDATE NOT IN (SELECT EFFDATE FROM PRESENTATIONEUROPE.DBO.JDECURRENCYCONVERSION))

Can anyone suggest a better way to accomplish this task, or is this query OK (or efficient enough)?

Hi

I think a more efficient way would be to use LEFT OUTER JOIN on your 3 keys and checking for missing values

INSERT INTO ....
SELECT .....
FROM JDECurrencyRates cr
LEFT OUTER JOIN JDECurrencyConversion cc
ON cr.FromCurrency = cc.FromCurrency
AND cr.TOCURRENCY = cc.TOCURRENCY
AND cr.EFFDATE = cc.EFFDATE
WHERE cc.FromCurrency IS NULL

Any field should do in the WHERE clause

NB.

Thanks very much for providing an efficient way.

Yes! I have tried it and it works fine.


Using LEFT JOIN with IS NULL is not the efficient way. NOT EXISTS is the fastest way to perform these types of checks (absence or existence of rows). SQL Server 2000 & 2005 will typically generate the same plan for NOT IN / NOT EXISTS and IN / EXISTS queries. But using EXISTS/NOT EXISTS is always safer because you will not get into a situation where you get wrong results (as can happen with NOT IN/IN) due to NULL values. The use of LEFT JOIN with a NULL check uses more operators than a NOT EXISTS query.

Compare the estimated query plan costs of the queries below:

-- Get list of authors with no titles:

select *
from pubs.dbo.authors as a
where not exists(select * from pubs.dbo.titleauthor as ta
where ta.au_id = a.au_id)

select *
from pubs.dbo.authors as a
left join pubs.dbo.titleauthor as ta
on ta.au_id = a.au_id
where ta.au_id is null

go

-- Get list of tables with no indexes:

select t.object_id
from sys.tables as t
where not exists(select *
from sys.indexes as i
where i.object_id = t.object_id)

select t.object_id
from sys.tables as t
left join sys.indexes as i
on i.object_id = t.object_id
where i.object_id is null

go

And depending on your indexes, data and the columns referenced in your queries, the performance could be even worse. At best, the LEFT JOIN with IS NULL check approach will be about as fast as the NOT EXISTS query. So you should write your INSERT...SELECT like:

INSERT INTO PresentationEurope.dbo.JDECurrencyConversion
    (Date, FromCurrency, FromCurrencyDesc,
     ToCurrency, ToCurrencyDesc, EffDate, FromExchRate, ToExchRate,
     CreationDatetime, ChangeDatetime)
SELECT effdate as date, FromCurrency, FromCurrencyDesc
     , ToCurrency, ToCurrencyDesc, EffDate, FromExchRate
     , ToExchRate, GETDATE(), GETDATE()
FROM MAINTENANCE.DBO.JDECURRENCYRATES as j
WHERE NOT EXISTS(
    SELECT *
    FROM PRESENTATIONEUROPE.DBO.JDECURRENCYCONVERSION as j1
    WHERE j1.FromCurrency = j.FROMCURRENCY
      AND j1.TOCURRENCY = j.TOCURRENCY
      AND j1.EFFDATE = j.EFFDATE
)
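
As a side note on the NULL pitfall mentioned above, a tiny self-contained repro (the temp tables and values are made up):

CREATE TABLE #a (id int)
CREATE TABLE #b (id int)
INSERT INTO #a VALUES (1)
INSERT INTO #a VALUES (2)
INSERT INTO #b VALUES (1)
INSERT INTO #b VALUES (NULL)

-- NOT IN returns no rows once the subquery contains a NULL:
SELECT * FROM #a WHERE id NOT IN (SELECT id FROM #b)

-- NOT EXISTS still returns id = 2 as expected:
SELECT * FROM #a WHERE NOT EXISTS (SELECT * FROM #b WHERE #b.id = #a.id)

DROP TABLE #a
DROP TABLE #b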


(excuse bad english)

I think that the problem resides in the absence of an association between the "from..." and "toex" tables. Maybe the day (date) would be the same. Did the number of records inserted exceed what you expected?

Marcelo

Friday, February 17, 2012

Efficiency of INSERT with multiple rows

Let's say that I am inserting rows into a table via a statement like this
INSERT INTO A SELECT * FROM B
And let's say that table B has 1,000,000 rows
I was just wondering if SQL Server rebuilds the indexes on table A after
each row is inserted or if it waits until all 1,000,000 rows are inserted
and then rebuilds the indexes at the end? If it rebuilds the indexes upon
each insert then I would probably drop the indexes first and then just add
them at the end.
Does anybody know?
Thanks
Richard Speiss"Richard Speiss" <rspeiss@.mtxinc.com> wrote in message
news:utEtTH6NEHA.2468@.TK2MSFTNGP11.phx.gbl...
The index is not 'rebuilt' upon each insert. It is updated on each insert.
Since 'table splits' on indexes can be costly, it is sometimes a good idea
to remove the index.
But that depends; if you have a very large table with billions of rows and
have to remove and reapply the index :<<
ps: For batch inserts and bulk loading rows, always initiate an explicit
transaction!

Oops, I did mean does the index get updated (bad wording on my part; I
didn't expect the entire thing to be rebuilt).
Thanks for the reply
Richard

Richard,
SQL Server does not rebuild the indexes as data is inserted, rather the
indexes are "maintained". i.e. if you insert a new row into the table, then
all indexes on the table will need to have the index pages maintained at the
same time.
Sometimes you will find that dropping all indexes, doing a large insert, and
then creating the index after the load is quicker than doing the load with
the indexes defined. Sometimes you won't. Only testing will reveal which way
will be quicker for your environment.
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
"Richard Speiss" <rspeiss@.mtxinc.com> wrote in message
news:utEtTH6NEHA.2468@.TK2MSFTNGP11.phx.gbl...
> Let's say that I am inserting rows into a table via a statement like this
> INSERT INTO A SELECT * FROM B
> And let's say that table B has 1,000,000 rows
> I was just wondering if SQL Server rebuilds the indexes on table A after
> each row is inserted or if it waits until all 1,000,000 rows are inserted
> and then rebuilds the indexes at the end? If it rebuilds the indexes upon
> each insert then I would probably drop the indexes first and then just add
> them at the end.
> Does anybody know?
> Thanks
> Richard Speiss
>
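
A sketch of the drop/load/recreate pattern being discussed, using the A and B tables from the question (the index name and column are made up, and as noted above, only testing will show whether this actually beats leaving the index in place):

BEGIN TRANSACTION

DROP INDEX A.IX_A_SomeColumn        -- SQL Server 2000 syntax: table.index

INSERT INTO A
SELECT * FROM B

CREATE INDEX IX_A_SomeColumn ON A (SomeColumn)

COMMIT TRANSACTION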
