Sunday, February 26, 2012

EM and the Resource consumption.....

If Enterprise Manager is running with servers registered, will it be a
resource-consumption issue for the server? We have a couple of developers
connected to the server all the time using EM.
Lately I read an article on Management Studio for SQL Server 2005 which
handles this issue.
How much does it contribute to the resource usage of SQL Server? I am
considering all possible ways to improve the performance of our SQL Server.
Thank you,
Alwin
Hi
EM does not consume more resources than a normal client application
connection to SQL Server.
As long as you don't use the "open table" to look at data, you are fine.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Alwin Mark" <AMark_SFO@.hotmail.com> wrote in message
news:eygY6GatFHA.2472@.TK2MSFTNGP14.phx.gbl...
> If the Enterprise Manger is running and the registered servers will be
> resource consuming issue for the server. We have couple of developers
> connected to the server all the time using EM.
> Lately I read an article on Management Studio for SQL Server 2005 which
> handles this issue.
> How much it is contributing to the resources usage of SQL Server? I am
> thinking all possible way to improve the performance of our SQL Server.
> Thank you,
> Alwin
>

EM and QA displays different format for a date field

Hi,
I have a datetime field with a value '1/1/02'. EM displays it as 1/1/02, but
Query Analyzer displays it as 2002-01-01 00:00:00.000 (I used SELECT * FROM
Table1).
So in Query Analyzer, I can't tell the format of it. I just wonder if your
PC does the same.
Thanks.
You have a misunderstanding of what a datetime value is and how it is
stored. It is actually just stored as a numeric value, denoting an offset from
some point in the reasonably distant past. The format is based on the client. QA
uses a standard format, mostly because when you are using QA the goal is
usually to see ALL of the data.
Enterprise Manager is probably using your system settings. When you
actually use the data, you can format it as you wish.
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Chrissi" <anubisofthydeath@.hotmail.com> wrote in message
news:OqYzX0yUFHA.2136@.TK2MSFTNGP10.phx.gbl...
> Hi,
> I have a datetime field with a value '1/1/02'. EM display it as 1/1/02
> But Query Analyzer displays it as 2002-01-01 00:00:00.000 (I used SELECT *
> FROM Table1)
> So in Query Analyzer, I can't tell the fomat of it. I just wonder if your
> PC does the same.
> Thanks.
>|||Chrissi,
Different clients display things differently, with the possible
exception of strings made up of printable ASCII with codes
from 32 to 127.
If you want the datetime displayed in a particular format, I'd
suggest you select CONVERT(varchar(40), yourDatetimeColumn, XXXX),
where XXXX is the appropriate SQL Server format code (see the
Books Online article CAST and CONVERT).
If you do this, you won't be able to "edit" the datetime values
in Enterprise Manager, but that's not a good idea anyway. Better
to make changes programmatically with UPDATE or INSERT
statements.
Steve Kass
Drew University
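For illustration, a sketch of the CONVERT approach Steve describes, using the column and table names from the thread (style 120 is just one of the available codes; see CAST and CONVERT in Books Online for the full list):

    SELECT CONVERT(varchar(40), yourDatetimeColumn, 120) AS formatted_value  -- yyyy-mm-dd hh:mi:ss
    FROM Table1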
Chrissi wrote:

>Hi,
>I have a datetime field with a value '1/1/02'. EM display it as 1/1/02 But
>Query Analyzer displays it as 2002-01-01 00:00:00.000 (I used SELECT * FROM
>Table1)
>So in Query Analyzer, I can't tell the fomat of it. I just wonder if your
>PC does the same.
>Thanks.
>
>

EM and QA access different

Users (other than sysadmin) can query tables in QA just fine, but when they
try to view the same table in EM, they get an error that says they "might not
have permission to perform this operation, or the object XXX might no longer
exist in the database."
Any ideas?

RP
How do you query the table? By stored procedure or by querying the underlying
table?
"RP" <RP@.discussions.microsoft.com> wrote in message
news:19B8A84F-3E22-455D-9792-BD4AB4F5E7AF@.microsoft.com...
> Users (other than sysadmin) can query tables in QA just fine, but when they
> try to view the same table in EM, they get an error that says they "might not
> have permission to perform this operation, or the object XXX might no longer
> exist in the database."
> Any ideas?|||just a regular query of the table
"Uri Dimant" wrote:

> RP
> How do you query the table? by stored procedure or by querying underlying
> table?
>
> "RP" <RP@.discussions.microsoft.com> wrote in message
> news:19B8A84F-3E22-455D-9792-BD4AB4F5E7AF@.microsoft.com...
>
>
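The thread ends there without a resolution. One thing worth checking is what permissions the non-sysadmin users actually hold on the table; a hedged sketch using sp_helprotect (the object and user names are placeholders):

    -- List object permissions on the table for a given user
    EXEC sp_helprotect @name = 'YourTable', @username = 'SomeUser'

If SELECT is only reaching the users through a stored procedure (ownership chaining) rather than being granted on the table itself, EM's "Open Table" would fail even though QA queries through that procedure succeed.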

EM alternative

Hello,
Is there any alternative to Enterprise Manager?
All links will be appreciated.
Thank you,
3Dimka.
>Is there any alternative to Enterprise Manager?
>All links will be appreciated.
>
Toad (familiar in Oracle environs) has a freely downloadable version for
SQLServer.
http://www.toadsoft.com/toadsqlserve..._sqlserver.htm
|||Hi,
1. You can use the free tool from Microsoft named "Web data Administrator"
Download the tool from
http://www.microsoft.com/downloads/d...798-C57A-419E-ACBC-2A332CB7F959&displaylang=en
2. There is a very good tool, DBArtisan, from Embarcadero Technologies.
http://www.embarcadero.com/products/dbartisan/
Thanks
Hari
MCDBA
"Dmitriy Bessmertnyy" <dmitriy@.compuweigh.com> wrote in message
news:uw8UKKNTEHA.1508@.TK2MSFTNGP11.phx.gbl...
> Hello,
> Is there any alternative to Enterprise Manager?
> All links will be appreciated.
> Thank you,
> 3Dimka.
>
|||> Is there any alternative to Enterprise Manager?
> All links will be appreciated.
You should try myLittleAdmin (http://www.myLittleTools.net/mla_sql)
You can test it online (http://www.myLittleTools.net/livedemo/mla_sql)
or download a free lite version.
Best regards
Elian Chrebor
// myLittleTools.net : leading provider of web-based applications.
// myLittleAdmin : online MS SQL manager
// http://www.mylittletools.net
// webmaster@.mylittletools.net
|||Hi,

> Is there any alternative to Enterprise Manager?
> All links will be appreciated.
Give Database Workbench a try - www.upscene.com
With regards,
Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com
|||Wow
That is one useful program
Best of all, it's free!
Thanks for posting that
"ChrisRath" wrote:

> Toad (familiar in Oracle environs) has a freely downloadable version for
> SQLServer.
> http://www.toadsoft.com/toadsqlserve..._sqlserver.htm
>

EM access error when DB name is a GUID

I am receiving a connection error when I use EM to try to
open a table in my database. I am connecting to SQL Server
as the sa user. I don't have any access errors if I use
Query Analyzer. The problem occurs trying to access tables
in a database that has a GUID for the database name
{910216B1-6186-4F13-A626-D1D4D811F3B1} The brackets are
part of the name. Any GUID name will fail. All other EM
functions work correctly. EM works correctly for all non-
GUID named databases.
The error message is "Connection failed: SQLState '08004'
SQL Server Error: 4060 Server rejected the connection;
Access to selected database has been denied."
I am running SQL Server 2000, version 8.00.818 on a
Windows 2000 server system with SP 4. The same thing
happens on a Windows XP Pro system running SQL Server 2000
Tobin Schuster wrote:
> I am receiving a connection error when I use EM to try to
> open a table is my database. I am connecting to SQL Server
> as a SA user. I don't have any access errors if I use
> Query Analyzer. The problem occurs trying to access tables
> in a database that has a GUID for the database name
> {910216B1-6186-4F13-A626-D1D4D811F3B1} The brackets are
> part of the name. Any GUID name will fail. All other EM
> functions work correctly. EM works correctly for all non-
> GUID named databases.
> The error message is "Connection failed: SQLState '08004'
> SQL Server Error: 4060 Server rejected the connection;
> Access to selected database has been denied.
> I am running SQL Server 2000, version 8.00.818 on a
> Windows 2000 server system with SP 4. The same thing
> happens on a Windows XP Pro system running SQL Server 2000
Since the name you are using contains invalid characters, it must be
delimited. I'm guessing that SQL EM is not putting brackets around the
name to get past this.
David Gugick
Imceda Software
www.imceda.com
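For reference, the delimiting David mentions looks like this in T-SQL; the GUID is the database name from the post, and the table name is hypothetical:

    -- Works in Query Analyzer because the name is bracket-delimited
    USE [{910216B1-6186-4F13-A626-D1D4D811F3B1}]
    SELECT * FROM [{910216B1-6186-4F13-A626-D1D4D811F3B1}].dbo.SomeTable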

EM - question

Hi,
we are using SQL 2000.
When I open EM and expand the tree for databases, I see all the
databases. Is it possible to make some database groups?
I mean, make a group called, for example, "Jakub" in which I will
have all my databases, then another group called "Vrata" with some
other databases ...etc
thanks for answers
Jakub
Unfortunately, that feature is not available.
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
"Jakub Novak" <jnovak@.memrb.cz> wrote in message
news:u2G$JGn2DHA.3656@.TK2MSFTNGP11.phx.gbl...
Hi,
we are using SQL 2000.
When I open the EM, expand the tree for databases, so I see all the
databases.It's possible to make some database groups?
I mean make just group called for example "Jakub" and in this group I will
have all my databases, then next group called "Vrata" and there will be some
other databases ...etc
thanks for answers
Jakub

EM - Open table - Return Top... -1000

Is there a way to change the Open Table - Return Top... 1000 default
to something like 10, so it returns only 10 rows by default? Any registry
keys?
<geethacharan@.gmail.com> wrote in message
news:1097703679.181878.189490@.f14g2000cwb.googlegr oups.com...
> Is there a way to change the Open table - Return Top... -1000 default
> to something like 10. It should return only 10 by default? Any registry
> keys?

It doesn't look like it - there's no obvious option or registry key that
changes this.

Simon

EM - Current Activity

I manage a data warehouse. Each month I potentially update
millions of records (only if they've changed from the
previous month). You can imagine this update process is
quite time consuming. I've got the tables spread across 2
to 3 drives via File Groups. The table I update from is
either a table variable, a temp table, or a table
on its own drive with a clustered index. I've got 2 gigs
of RAM for SQL (Std Edition) but a total of 4, SQL's
priority is boosted, plenty of hard drive space, 4
processors. In EM's current activity, under Physical IO I
seem to have a bottleneck. My wait times for one process
(multiple instances) can easily be in the millions. When I
use Perf Mon to view disk activity (read\write) at least
one usually 2 drives are maxed out. Any thoughts on how I
can improve performance?
Liz,
Well, one way is to ensure the log file is on a separate RAID 1 from anything
else. When you say you have the data spread across 3 drives, do you mean 3
individual physical drives or 3 drive arrays? 2 or 3 physical drives are
usually not enough to get good performance out of a typical DW application.
The more drives the better. For updates it is very important to have the
log on a separate array from the data. You can also try doing the updates
in smaller batches and see if that speeds things up, but you will still get
high disk activity.
--
Andrew J. Kelly
SQL Server MVP
"Liz" <anonymous@.discussions.microsoft.com> wrote in message
news:257201c3af73$a15507b0$a601280a@.phx.gbl...
> I manage a data warehouse. Each month I potentially update
> millions of records (only if they've changed from the
> previous month). You can imagine this update process is
> quite time consuming. I've got the tables spread accross 2
> to 3 drives via File Groups. The table I update from is
> either a data type table variable, temp table or a table
> on it's on drive with a clustered index. I've got 2 gigs
> of RAM for SQL (Std Edition)but a total of 4, SQL's
> priority is boosted, plenty of hard drive space, 4
> processors. In EM's current activity, under Physical IO I
> seem to have a bottleneck. My wait times for one process
> (multiple instances) can easily be in the millions. When I
> use Perf Mon to view disk activity (read\write) at least
> one usually 2 drives are maxed out. Any thoughts on how I
> can improve performance?
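As a rough sketch of the "smaller batches" idea Andrew mentions, for SQL 2000 (table, column, and key names here are hypothetical, and the batch size is arbitrary):

    SET ROWCOUNT 50000                   -- limit each UPDATE to 50,000 rows
    WHILE 1 = 1
    BEGIN
        UPDATE f
        SET    f.some_column = s.some_column
        FROM   dbo.FactTable AS f
        JOIN   dbo.StagingTable AS s ON s.key_col = f.key_col
        WHERE  f.some_column <> s.some_column   -- only rows that actually changed

        IF @@ROWCOUNT = 0 BREAK          -- nothing left to update
    END
    SET ROWCOUNT 0                       -- reset the limit

Each pass is a smaller transaction, which keeps the per-statement log load down, though as Andrew says total disk activity will still be high.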

EM - connect to correct server

Hello -
I can connect to other servers through QA with no problem, however, when I
try to access another server through Enterprise Manager, I cannot find where
I can change the server. It is defaulting to my local machine.
Can someone tell me how to do this?
Any help is greatly appreciated!
Sandy
In EM, right-click on your LOCAL connection. When the menu comes up you have
two options, "Edit SQL Server Registration properties" or "New SQL Server
Registration". Use the "New..." item to create a new registration for your other
server, or if you want to change the LOCAL registration to point to a
different server, then pick the "Edit..." item.
"Sandy" wrote:

> Hello -
> I can connect to other servers through QA with no problem, however, when I
> try to access another server through Enterprise Manager, I cannot find where
> I can change the server. It is defaulting to my local machine.
> Can someone tell me how to do this?
> Any help is greatly appreciated!
> --
> Sandy|||Check out in the BOL:
Administering SQL Server -> Managing Servers -> registering Servers
-Omnibuzz (The SQL GC)
http://omnibuzz-sql.blogspot.com/|||Thanks for your response, Greg!
Does this just register it on my local machine so I can access and use it?
In other words, am I changing any settings of the particular instance of SQL
Server on the server by doing this?
Also, it does not give any options for using Mixed Mode. The instance of
SQL Server I'm using is Mixed. How would you handle this situation?
Thanks again,
--
Sandy
"Greg Larsen" wrote:
> In EM, right click on your LOCAL connection. When the menu comes up you have
> two option "Edit SQL Server Registration properites" or "New SQL Server
> Registration". Use the "New.." item to create a new item to your other
> server, or if you what to change the LOCAL registration ot point to a
> different server then pick the "Edit..." item.
> "Sandy" wrote:
>

EM

Can I give EM to a customer for his use to interface with a remotely
located SQL Server if both are legal? If not, does he need to purchase
SQL Server to have use of a legal copy of EM?
Hi
For licensing issues, please refer this

http://chanduas.blogspot.com/2005/0...ensing-faq.html

best Regards,
Chandra
http://groups.msn.com/SQLResource/
http://chanduas.blogspot.com/
------------

*** Sent via Developersdex http://www.developersdex.com ***|||Unless you are using MSDE the customer has to be covered by a licence
(either CAL or processor licence) in order to use SQL Server whether he
has EM or not. If the customer is licensed he's entitled to install the
client tools, including EM.

If it's MSDE then you cannot use EM but there are EM replacement
utilities available for MSDE.

--
David Portas
SQL Server MVP
--|||David Portas wrote:
> Unless you are using MSDE the customer has to be covered by a licence
> (either CAL or processor licence) in order to use SQL Server whether he
> has EM or not.

Sorry it wasn't clear, but in my case, SQL Server is on my remote
website. So, as an analogy, when you access Amazon.com to buy a book
or whatever, a customer doesn't need a database license to do that.
The customer doesn't see the database, all he sees is the web front
end.

However, I would like the customer to be able to load some large
amounts of data himself and I may not be able to set up a share
effectively on my website to let DTS load these
ftp'ed-to-my-website-files.

>If the customer is licensed he's entitled to install the
> client tools, including EM.

Basically what I'm hearing here is if the customer doesn't have a SQL
Server license then under no conditions can he legally install EM (to
set up DTS data transfers).|||> Sorry it wasn't clear, but in my case, sql server is on my remote
> website. So, as an analogy, when you access Amazon.com to buy a book
> or whatever, a customer doesn't need a database license to do that.

If your site is publicly accessible over the internet then, in that case, you
must have a *Processor* licence. A processor licence is required for
any public web-site that utilizes SQL Server. If you have a Processor
licence then in theory remote users ARE licensed to install the client
tools. However, you won't be able to do this over public networks
because the ports used by SQL will be blocked by your firewall and/or
your ISP or your customer's ISP. To do this you'll need a VPN in place
between you and the customer. In that case though, a better option is
probably remote access via Terminal Server or similar because EM
doesn't behave well over a low-bandwidth network.

--
David Portas
SQL Server MVP
--|||Just for clarification. If your site is not a public one and you are
not using a processor licence, you can still install EM at the
customer. In that case you would have to have some mechanism to
restrict access to the site (such as VPN). You would also need to
purchase a CAL for the customer, but that applies whether or not they
are using EM because any access to the database server must be covered
by a licence.

That's my understanding but refer to the licensing docs or to Microsoft
for the definitive answers.

--
David Portas
SQL Server MVP
--

Elusive Backup Process

Howdy Folks,

I have recently taken over looking after this machine & the previous guardian is unavailable. I've got to a point where I'm starting to think about looking into MSDB, but am not overly keen to start reviewing this db for obvious reasons... Can anyone please offer advice?

Thanks In Advance
Suze.

Environment: Microsoft SQL Server 2000 - 8.00.760 (sp3) on Win2000
H/W Dell PowerEdge 2550

Issue: I discovered in the SQL log that there is a second set of backups occurring around 7:30am - the backups we have set up run at 2am and are also logged to the SQL log as completing normally.

The 7:30am backups appear to be backing up to a remote location using pipes... or, at a guess, an agent?? (I don't know what that looks like because we don't use agents.) We back up databases to disk and then let ARCserve pick up those backup files later.

I assume that this has been occurring for some time; the SQL logs, which go back to July 7, suggest so... I am unable to locate a scheduled job / maintenance plan that runs at this time...

Last night I set up a trace to catch it, but that hasn't helped me either, apart from telling me that it's a process internal to the server, and that the process actually started at 6:50am.

SQL log entries between midnight & 8am are in the attached file:

Trace contents are in the attached zip file:
Someone had the SQL agent installed on this server previously. It appears that the NT backup person still had it selected in the backup as well as the backups to disk. We have now removed this and shall wait and see tonight, but I guess I now know what a remote agent ARCserve backup looks like.

Cheers
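For anyone digging into the same thing, the backup history in msdb records who ran each backup and where it went, which is what would have pointed at the agent; a sketch against the SQL 2000 msdb tables (the date cutoff is arbitrary):

    SELECT bs.database_name, bs.backup_start_date, bs.user_name,
           bmf.physical_device_name
    FROM   msdb.dbo.backupset AS bs
    JOIN   msdb.dbo.backupmediafamily AS bmf ON bmf.media_set_id = bs.media_set_id
    WHERE  bs.backup_start_date >= '20050707'   -- roughly where the SQL logs begin
    ORDER BY bs.backup_start_date

Backups taken through a third-party agent such as the ARCserve one found here typically show a virtual-device or named-pipe style physical_device_name rather than a disk path.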

ELSEIF in Stored Procedures?

I have found some documentation regarding the use of IF...ELSE statements in stored procedures, but what about multiple condition statements? For example, say I need 3 unique fields in my table. If my application passes a value that is a duplicate in one of the columns, the stored proc will fail, but it is difficult to know which item caused the failure and therefore difficult for the user to get a meaningful error message in order to correct their input. I am thinking I could just make a conditional statement that applies a code to an OUTPUT parameter in order to clarify the error:

(pseudo code)

if @field1 already exists then @output = '1'; terminate stored procedure

elseif @field2 already exists then @output = '2'; terminate stored procedure

elseif @field3 already exists then @output = '3'; terminate stored procedure

else finish the insert

(end pseudo code)

Are 'elseif' statements allowed in SQL Server? Am I going about this in the wrong way?

Jungalist wrote:

Are 'elseif' statements allowed in SQL Server?

Sort of. You can implement the logic as follows:

IF @field1 already exists
BEGIN
    SET @OUTPUT = 1
END
ELSE IF @field2 already exists
BEGIN
    SET @OUTPUT = 2
END
ELSE IF @field3 already exists
BEGIN
    SET @OUTPUT = 3
END

Check out Books Online for "IF ELSE".
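A concrete sketch of the original pseudo-code, as it might sit inside the procedure; the table and column names (MyTable, field1..field3) and the varchar @output parameter are hypothetical:

    IF EXISTS (SELECT 1 FROM dbo.MyTable WHERE field1 = @field1)
    BEGIN
        SET @output = '1'     -- duplicate in field1
        RETURN
    END
    ELSE IF EXISTS (SELECT 1 FROM dbo.MyTable WHERE field2 = @field2)
    BEGIN
        SET @output = '2'     -- duplicate in field2
        RETURN
    END
    ELSE IF EXISTS (SELECT 1 FROM dbo.MyTable WHERE field3 = @field3)
    BEGIN
        SET @output = '3'     -- duplicate in field3
        RETURN
    END
    ELSE
    BEGIN
        INSERT INTO dbo.MyTable (field1, field2, field3)
        VALUES (@field1, @field2, @field3)    -- all three are unique, finish the insert
    END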

|||

Thank-you. I was searching for the wrong terms. I appreciate the help.

Else statement is not working in the stored procedure

I have a stored procedure that needs to populate the fields of records in tblBag_data from tblShipping_sched; if a record is not found there, it looks in tblShipment_history. But it is not checking the history table (the 2nd table). Please help!

CREATE Procedure spUpdate_bag_data
@t1 int OUT
AS

declare @work_ord_num char(9), @two char(7), @work_ord_line_num char(3), @cust_num char(5), @cust_name char(50), @apple_part_num char(12), @apple_catalog_num char(28);

Declare update_bag CURSOR
FOR
SELECT work_ord_num, work_ord_line_num
FROM tblBag_data
WHERE cust_num IS NULL;

OPEN update_bag
FETCH NEXT FROM update_bag INTO @work_ord_num, @work_ord_line_num

WHILE @@FETCH_STATUS = 0 --and @counter<30
BEGIN
--set @counter = @counter + 1
SET @two = LEFT(@work_ord_num,6) + '%'
set @cust_num = '';

SELECT @cust_num = cust_num, @cust_name = cust_name, @apple_part_num = apple_part_num, @apple_catalog_num = apple_catalog_num
FROM tblShipping_sched
WHERE work_ord_num like @two AND work_ord_line_num = @work_ord_line_num;

IF @@RowCount > 0
BEGIN

UPDATE tblBag_data
SET cust_num = @cust_num, cust_name = @cust_name, apple_part_num = @apple_part_num, apple_catalog_num = @apple_catalog_num
WHERE work_ord_num like @two AND work_ord_line_num = @work_ord_line_num;
END

ELSE
BEGIN

SELECT cust_num = @cust_num, cust_name = @cust_name, apple_part_num = @apple_part_num, apple_catalog_num = @apple_catalog_num FROM tblShipment_history
WHERE work_ord_num like @two AND work_ord_line_num = @work_ord_line_num;

IF @cust_num IS NOT NULL and len(@cust_num) = 5
UPDATE tblBag_data SET cust_num = @cust_num, cust_name = @cust_name, apple_part_num = @apple_part_num, apple_catalog_num = @apple_catalog_num
WHERE work_ord_num like @two AND work_ord_line_num = @work_ord_line_num;

END

FETCH NEXT FROM update_bag INTO @work_ord_num, @work_ord_line_num
END

close update_bag
deallocate update_bag

return(1)

Why are you using a cursor? There's no need.

Also, if
work_ord_num AND work_ord_line_num

are not the primary key or a unique constraint on
tblShipping_sched

then you can get back multiple rows... and your assignment to the variables will be the last one returned...

And since there is no input variable to the sproc, it means you'll be doing every row in the table every time you run it...
well, every row where cust_num is null

Need to see the DDL for the three tables...(sample data wouldn't hurt either)|||Originally posted by Brett Kaiser
Why are you using a cursor? There's no need.

Also if
work_ord_num AND wrk_ord_line_num

are not the primary or a unique constraint to
FROM tblShipping_sched

Then you can get back multiple rows...and your assingment to the variables will be the last one returned...

And since there is no input variable to the sproc, it means you'll be doing every row in the table every time you run it...
well where cust_num is null

Need to see the DDL for the three tables...(sample data wouldn't hurt either)

Hi Brett,

I have a tblBag_data that needs four fields populated from tblShipping_sched or tblShipment_history. The stored procedure takes the work_ord_num and work_ord_line_num in tblBag_data and matches them to tblShipping_sched if the cust_num is null. It loops through tblShipping_sched for that record; if it finds the record, it populates the four fields (cust_name, cust_num, ...) in tblBag_data. But if it doesn't find it, it is supposed to go to the tblShipment_history table, loop through it for the same record, and populate tblBag_data once it finds it.

The IF statement seems to be working fine, but the ELSE is definitely not working. If there is a better way to write this without a cursor, please provide some sample code.

Thank you.
I hope it|||Sorry...work got in the way...

How about:

UPDATE l
SET cust_num = r.cust_num
, cust_name = r.cust_name
, apple_part_num = r.apple_part_num
, apple_catalog_num = r.apple_catalog_num
FROM tblBag_data l
INNER JOIN tblShipping_sched r
ON r.work_ord_num like LEFT(l.work_ord_num,6) + '%'
AND r.work_ord_line_num = l.work_ord_line_num
WHERE l.cust_num IS NULL

And you don't even have to worry whether it finds it or not, because you can then just do the second query, because the cust_num will still be null|||Originally posted by Brett Kaiser
Sorry...work got in the way...

How about:

UPDATE l
SET cust_num = r.cust_num
, cust_name = r.cust_name
, apple_part_num = r.apple_part_num
, apple_catalog_num = r.apple_catalog_num
FROM tblBag_data l
INNER JOIN tblShipping_sched r
ON r.work_ord_num like LEFT(l.work_ord_num,6) + '%'
AND r.work_ord_line_num = l.work_ord_line_num
WHERE l.cust_num IS NULL

And you don't even have to worry if it finds it ot not because youcan then just do the second query, because the cust_num will still be null

Hmm... This might be a solution.
I'll give it a try.
Thanks!
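For completeness, the second query Brett refers to would presumably be the same update run against the history table, picking up only the rows the first pass left unfilled (a sketch using the column names from the thread):

    UPDATE l
    SET   cust_num = r.cust_num
        , cust_name = r.cust_name
        , apple_part_num = r.apple_part_num
        , apple_catalog_num = r.apple_catalog_num
    FROM  tblBag_data l
    INNER JOIN tblShipment_history r
          ON  r.work_ord_num LIKE LEFT(l.work_ord_num, 6) + '%'
          AND r.work_ord_line_num = l.work_ord_line_num
    WHERE l.cust_num IS NULL    -- still null after the tblShipping_sched pass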

else if statement


I have this function and it is giving me the error below the code, and I want to know what my mistake is.

create function pt
(@idpt INT,
@option INT)
Returns Real
AS
Begin
if (@option=1)
Begin
Return ( select sqrt(a.px*a.px + a.py*a.py)
from electron as a
where @idpt = a.idap)
end
else if(@option=2)
Begin
Return ( select sqrt(a.px*a.px + a.py*a.py)
from muon as a
where @idpt = a.idap)
End
else
Begin
Return ( select sqrt(a.px*a.px + a.py*a.py)
from jet as a
where @idpt = a.idap)
end
end

Msg 455, Level 16, State 2, Procedure pt, Line 22
The last statement included within a function must be a return statement.
Try putting a dummy "RETURN NULL" after the large IF block. I think the compiler isn't realizing that at least one of the 3 return statements you've got will always be called, and it's throwing a fit. The extra RETURN will never get called, but it'll at least keep the compiler happy.
|||

Try the code below.

Chris

Code Snippet

CREATE FUNCTION pt (@idpt INT, @option INT)
RETURNS REAL
AS
BEGIN
    DECLARE @Output REAL

    IF (@option = 1)
    BEGIN
        SELECT @Output = SQRT(a.px * a.px + a.py * a.py)
        FROM electron AS a
        WHERE @idpt = a.idap
    END
    ELSE IF (@option = 2)
    BEGIN
        SELECT @Output = SQRT(a.px * a.px + a.py * a.py)
        FROM muon AS a
        WHERE @idpt = a.idap
    END
    ELSE
    BEGIN
        SELECT @Output = SQRT(a.px * a.px + a.py * a.py)
        FROM jet AS a
        WHERE @idpt = a.idap
    END

    RETURN @Output
END
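A quick usage example for the function above (the idap value 42 is made up):

    SELECT dbo.pt(42, 1) AS pt_electron,   -- @option 1 reads from electron
           dbo.pt(42, 2) AS pt_muon,       -- @option 2 reads from muon
           dbo.pt(42, 3) AS pt_jet         -- anything else reads from jet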

ELSE IF Question

Maybe my brain is fried from staring at this too long but I have a question on how IF ELSE IF is read based on nesting.

Is it read like this:

IF

ELSE IF

ELSE IF

END

...or is it read like this?

IF

ELSE

IF

ELSE

IF

ELSE

END

Thanks,

Adamus

The second option.

And there is no END associated with an IF
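A minimal sketch of that reading (the variable and values are placeholders):

    DECLARE @x int
    SET @x = 2

    IF @x = 1
        PRINT 'one'
    ELSE IF @x = 2            -- really an ELSE whose single statement is another IF
        PRINT 'two'
    ELSE
    BEGIN                     -- BEGIN/END only where a branch needs several statements
        PRINT 'neither'
        PRINT 'fell through to the final ELSE'
    END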

|||

Thank you...

I simply forgot the BEGIN's and associated END's.

I get confused jumping from language to language. (especially late on Friday afternoon)

Adamus

ELSE CONDITION IN MDX

Hi, I'd appreciate your help to solve my development problem.
I need to use a conditional expression to assign a calculated member
value to cube members. I can't use the IIF function because I have 3
possible values (e.g. A, B, C). How can I implement the "ELSE" condition? In
pseudocode I say:
if x<=10 'A'
else if (x>10 and x<= 50) 'B'
else 'C'
Thankyou, regards,
Luis.
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!
You can nest the IIF statements
pseudo-code
IIF(x <= 10, 'A', IIF(x <= 50, 'B', 'C'))
Sean
Sean Boon
SQL Server BI Product Unit
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm.
"Luis Chiappara" <chappa@.montevideo.com.uy> wrote in message
news:%2393AGkVKEHA.808@.tk2msftngp13.phx.gbl...
> Hi, I'll appreciate your help to solve my develop problem.
> I need to use a conditional expresion to assign a calculated member
> value to cube members. I can't use the IIF function because I have 3
> possible values (eg A,B,C). How can I implement the "ELSE" condition? In
> pseudocode y say:
> if x<=10 'A'
> else if (x>10 and x<= 50) 'B'
> else 'C'
> Thankyou, regards,
> Luis.
>
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!

else 0 in a table

Hi,
Is it possible to display 0 in a table?
I have a table that drills down into Region, and say for instance when
a region has no sales for that day, I would like it to show as 0 rather than
not show on the report. In Crystal I would simply use an "else 0"; does anyone
know how to achieve this in SQL Reporting?
cheers
On Jun 12, 9:52 am, blueboy <matt_me...@.hotmail.com> wrote:
> Hi,
> Is it possible to display 0 in a table?
> i have a table that drills down into Region and say for instance when
> a region has no sales for that day i would like it to show as 0 not
> not show on the report in crystal i would simply use a else 0 anynow
> know how to achieve this in SQL reporting?
> cheers
If I am understanding you correctly, you should be able to include an
'if' statement as the expression for the cell/column you want to show
zero. Something like this should work:
=iif(Fields!Sales.Value is nothing or Len(CStr(Fields!Sales.Value)) < 1, 0, Fields!Sales.Value)
This basically says: if Sales is null/blank, return 0, else return the
Sales value.
Hope this helps.
Regards,
Enrique Martinez
Sr. Software Consultant|||Many thanks, will give this a shot right now :-)|||Enrique,
in my detail row I have
Region Sales
=count(Fields!Region.Value) =SUM(Fields!Sales.Value)
I have 4 regions (1-4); what I want is for it to show regions 1-4 even when
no sales have been done for that day. I have tried the above but it
doesn't seem to work and just returns the same value for the 2 regions
that have done sales and nothing for the 2 that have not.
I have used different variations of the below in the column expression and
the detail expression:
=iif(count(Fields!ARN.Value) is nothing or Len(CStr(count(Fields!ARN.Value))) < 1, 0, Fields!ARN.Value)
Please can you point out where I'm going wrong?
Many Many thanks in advance
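The thread ends there. One common approach, not mentioned above, is to fix it in the dataset query rather than in the report expression: drive the rows from the region list with a LEFT JOIN so zero-sale regions still come back. A sketch with hypothetical table, column, and parameter names:

    SELECT r.RegionName,
           COALESCE(SUM(s.SaleAmount), 0) AS Sales   -- 0 when there were no sales that day
    FROM   dbo.Regions AS r
    LEFT JOIN dbo.Sales AS s
           ON  s.RegionID = r.RegionID
           AND s.SaleDate = @ReportDate              -- hypothetical report parameter
    GROUP BY r.RegionName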

elongated or oval pie charts

I've designed a report that has 2 pie charts side by side. The report
size is 11 x 8.5 (landscape). The pie charts reside in their own
respective tables. The problem is that when I preview or save the
report as a pdf, the pie charts are elongated or oval shaped. The
height/width settings for each table cell are the same, i.e.
proportionate. I can't think of anything else that would cause this
problem.
Has anyone seen this and if so do you have a solution?
Are you using 3D pie charts with a vertical rotation? That's the only case
where you would get oval shaped pie charts. The size of the chart does not
change at runtime - you should see the same behavior in Layout mode,
Preview, and all output formats - not sure why you see this only in Preview
and PDF. Probably I'm missing something, but turning off 3D or changing the
vertical rotation to 0 degrees will result in pie charts drawn as circles.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"Kevin" <kevin@.wealthtouch.com> wrote in message
news:6bc68b86.0408110951.3b99ce56@.posting.google.com...
> I've designed a report that has 2 pie charts side by side. The report
> size is 11 x 8.5 (landscape). The pie charts reside in their own
> respective tables. The problem is that when I preview or save the
> report as a pdf, the pie charts are elongated or oval shaped. The
> height/width settings for each table cell are the same, i.e.
> proportionate. I can't think of anything else that would cause this
> problem.
> Has anyone seen this and if so do you have a solution?|||Robert,
Actually, what was causing the elongated pie was the fact that I had
it nested in a table with a legend that I created manually. The table
would grow or shrink depending on the amount of series showing up in
the legend. Basically, it's an Asset Allocation table with a pie chart
and a legend. The legend would get taller the more assets that were in
a portfolio. As a result, the taller the legend, the taller the table.
When the legend grew taller than the pie it caused the pie to become
elongated. So I placed the legend and pie chart in 2 separate tables
and problem solved.
"Robert Bruckner [MSFT]" <robruc@.online.microsoft.com> wrote in message news:<OwWoQlCgEHA.2916@.TK2MSFTNGP12.phx.gbl>...
> Are you using 3D pie charts with a vertical rotation? That's the only case
> where you would get oval shaped pie charts. The size of the chart does not
> change at runtime - you should see the same behavior in Layout mode,
> Preview, and all output formats - not sure why you see this only in Preview
> and PDF. Probably I'm missing something, but turning off 3D or changing the
> vertical rotation to 0 degrees will results in pie charts drawn as circles.
> --
> This posting is provided "AS IS" with no warranties, and confers no rights.
>
> "Kevin" <kevin@.wealthtouch.com> wrote in message
> news:6bc68b86.0408110951.3b99ce56@.posting.google.com...
> > I've designed a report that has 2 pie charts side by side. The report
> > size is 11 x 8.5 (landscape). The pie charts reside in their own
> > respective tables. The problem is that when I preview or save the
> > report as a pdf, the pie charts are elongated or oval shaped. The
> > height/width settings for each table cell are the same, i.e.
> > proportionate. I can't think of anything else that would cause this
> > problem.
> >
> > Has anyone seen this and if so do you have a solution?

Eliminating the expensive remote join

I've got a table (dexIndex) with 9 million rows in it. It's got an
identity column (index_id) and a column that's full-text indexed
(insureds_name) and a column that I want to filter on (agency_number).
When I query like this:
select * from decIndex where agency_number = '0349323' and
contains(insureds_name,'"SMITH"')
The execution plan involves a remote scan (98%) and a clustered index
seek (0%) which lead into a hash match/inner join (2%). The problem is
that the remote scan is returning 51,000 rows even though only 1 comes
out of the hash match/inner join. Thus, the query takes over a minute
on a built-to-the-hilt server.
Is there a way to have the FT engine only look through or return a
subset of the FT index, like only those rows which correspond to
agency_number = '0349323'?
I'd like to avoid adding agency_number to the FT index.
Thanks!
Regrettably there is no way of doing this. One of the strategies for getting
around this problem is to partition your table according to the other
elements of your where clause. Another option is to build full-text indexes
on indexed views in SQL 2005. Again, the view would be of a section of your
table which would match your where conditions.
"Ryan Walberg [MCSD]" <generalpf@.nospam.yahoo.reallynospam.com> wrote in
message news:%233kbwJZJFHA.588@.TK2MSFTNGP15.phx.gbl...
> I've got a table (dexIndex) with 9 million rows in it. It's got an
> identity column (index_id) and a column that's full-text indexed
> (insureds_name) and a column that I want to filter on (agency_number).
> When I query like this:
> select * from decIndex where agency_number = '0349323' and
> contains(insureds_name,'"SMITH"')
> The execution plan involves a remote scan (98%) and a clustered index seek
> (0%) which lead into a hash match/inner join (2%). The problem is that
> the remote scan is returning 51,000 rows even though only 1 comes out of
> the hash match/inner join. Thus, the query takes over a minute on a
> built-to-the-hilt server.
> Is there a way to have the FT engine only look through or return a subset
> of the FT index, like only those rows which correspond to agency_number =
> '0349323'?
> I'd like to avoid adding agency_number to the FT index.
> Thanks!
|||Hilary Cotter wrote:
> Regretably there is no way of doing this. One of the strategies of getting
> around this problem is to partition your table according to the other
> elements of you where clause. Another option is to build full text indexes
> on indexed views in SQL 2005. Again the view would be of a section of your
> table which would match your where conditions.
I ended up clustering the index that corresponded to the unique column
for the full-text index and that sped it up a great deal.
Thanks,
Ryan
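A sketch of the change Ryan describes, assuming index_id is the unique key column the full-text index is built on and the table is not already clustered on it (any existing clustered index would need to be dropped or rebuilt first):

    CREATE UNIQUE CLUSTERED INDEX IX_dexIndex_index_id
        ON dbo.dexIndex (index_id)

With the key column clustered, the join between the full-text results and the base table resolves rows without extra bookmark lookups, which is presumably where the speedup came from.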

Eliminating spaces left by collapsed drilldown option

I have a report which has the hidden/drilldown feature enabled when the
report is opened. The problem, visually, that I am having, is that the next
group's spaces are visible both on the rendered web page and the preview pane
in the Visual Studio. Does anyone know of a way to "collapse" the blank
lines (like in Crystal Reports) so that the items you can drilldown to
initially appear one right after another, only spreading out to accomodate
the information under each one when that particular group heading is expanded?
Thanks!
Catadmin wrote:
> I have a report which has the hidden/drilldown feature enabled when
> the report is opened. The problem, visually, that I am having, is
> that the next group's spaces are visible both on the rendered web
> page and the preview pane in the Visual Studio. Does anyone know of
> a way to "collapse" the blank lines (like in Crystal Reports) so that
> the items you can drilldown to initially appear one right after
> another, only spreading out to accomodate the information under each
> one when that particular group heading is expanded?
> Thanks!
Put the table/matrix inside a rectangle and toggle the rectangle's
visibility.
Works for me
Highly Obscure

Eliminating spaces in query output

Is there a way that I can do a select statement that will just select the text within a field? I.e., if the field is 200 characters wide but the actual text is only 20 characters, can I select JUST the 20 characters with nothing else? I need to do this for a huge number of rows ....

Also, I'm running the select statement via osql if that helps.
When selecting the column name you can use the trim function, e.g.

select rtrim(ltrim(emp_name))
from employees

This will trim blank spaces from the left and right hand ends of the character string. If you only need to clear trailing blanks just use the rtrim function.|||RTRIM was just what I needed, thank you.

Eliminating rows from select results

Hi,

is there any way to eliminate a row from the results returned from a select statement?

I'm using some aggregate functions and doing some division and occassionally I get a divide by zero error. Is there a way I can leave the row out if the divisor is zero? Briefly, my query looks like this:

select sum(a*b)/sum(b), c
from TableA
group by c

If sum(b) is zero I want to leave that row out.

Thanks for your help!

Wallace
Does your database engine support the HAVING clause?

-PatP|||This is what HAVING is doing essentially:

SELECT sum_a / sum_b, c
FROM ( SELECT SUM(a*b), SUM(b), c
FROM tableA
GROUP BY c ) AS t(sum_a, sum_b, c)
WHERE sum_b <> 0|||Having should do the trick.

Thanks for your help!|||This is what HAVING is doing essentially:

SELECT sum_a / sum_b, c
FROM ( SELECT SUM(a*b), SUM(b), c
FROM tableA
GROUP BY c ) AS t(sum_a, sum_b, c)
WHERE sum_b <> 0

It is answers like the one you provided above that encourage poor software development, in this instance, database development.

There is no need to create an inline view and filter the result set by applying a where clause to the outer query, when in this instance, the exact same result can be produced by using the Having clause.

Using the having clause is the preferred method for applying criteria to aggregate results, it was designed for this exact purpose. Although the where clause can be used in certain situations to filter aggregates, it was not designed so.

To follow your example, perhaps when I provide examples I should replace all column names with some arbitrary name in an attempt to make the solution appear complex and worthy of admiration, when in fact these types of responses would only cause unnecessary confusion to the reader.|||robert, take it easy, he was showing what the HAVING clause does, not suggesting that you avoid using it|||I do apologise if my response appears rude and overtly frank. However, I consider it to be totally inappropriate to hijack a thread with the intention to be a "clever dick" and publish a random solution with complete disregard for the original question.

stolze posted an unnecessary solution to a simple question, which could have easily been answered using a HAVING clause, but did not consider it proper to provide in addition, a solution using the Having clause.

Stolze: The poster was not asking for ways to replace the functionality of the Having clause, but instead the question asked how to remove rows from a set, and in this particular instance, the condition was to be applied after the aggregation. In future, perhaps you could assist the poster with their question before trying to prove a point with respect to your technical ability.

You wouldn't speak to your fellow colleagues in such a patronising tone, so why do it here?|||You wouldn't speak to your fellow colleagues in such a patronising tone, so why do it here?so why are you doing it?|||My comments were not said with a patronising tone.|||perhaps you did not intend them to be, but that is how they came across|||I apologise to all those who may find my comments in this thread rude and patronising. My intent was just to point out that if a simple and easy to understand solution exists, then it should be provided first, before suggesting other less obvious methods.

That's all.|||select sum(a*b)/sum(b), c
from TableA
group by c

If sum(b) is zero I want to leave that row out.

As already suggested, the following is indeed what you need:
SELECT sum(a*b)/sum(b), c
FROM TableA
GROUP BY c
HAVING sum(b) <> 0
Note that this also avoids zero division, i.e., the expression sum(a*b)/sum(b) is never executed when sum(b) is 0.|||I do apologise if my response appears rude and overtly frank. However, I consider it to be totally inappropriate to hijack a thread with the intention to be a "clever dick" and publish a random solution with complete disregard for the original question.

stolze posted an unnecessary solution to a simple question, which could have easily been answered using a HAVING clause, but did not consider it proper to provide in addition, a solution using the Having clause.

I just gave an explanation on how HAVING works. That's all...

Stolze: The poster was not asking for ways to replace the functionality of the Having clause, but instead the question asked how to remove rows from a set, and in this particular instance, the condition was to be applied after the aggregation. In future, perhaps you could assist the poster with their question before trying to prove a point with respect to your technical ability.

I have no idea why you are reacting so aggressively.

The OP apparently didn't know about the existence of HAVING and what it does. While the answer given by Pat was correct, I felt it is a good idea to provide more background information.

I had the chance to give some database courses at universities in the past. One thing I learned there is that students (they being university students or professionals doesn't matter in this respect) first have to learn basic concepts. Later you can use those basic concepts to explain other things. For example, once someone understands sub-selects, he/she will grasp the idea of having-clauses right away if you can show such a reformulated query. And that was my intention here as well. (I don't know where this was "patronizing".)

Also, I believe it is essential that people working with an RDBMS understand what is going on internally in order to avoid bad queries or to know what can be done efficiently and what may not be such a great idea. For example, just look at the comments here: http://us2.php.net/odbc_num_rows All those solutions show a distinct lack of understanding of relational database systems. What's my point? It is that a good idea would be that people implement their own simple DBMS because it helps tremendously to understand how a DBMS works internally or how queries can be rephrased. A simple approach to deal with HAVING is to internally rewrite a query to the construct I posted - without losing any functionality. (At the end of the day, the HAVING clause is very helpful but it just remains a bit of syntactic sugar in SQL.) However, I'm fully aware that not everyone has the time or access to the necessary expertise to implement their own small DBMS...

I suggest that you read a few more threads in this forum and other newsgroup. You will find that questions range from very basic stuff on transactions, the relational model, etc. to the meaning of specific options or error messages and their possible causes. I found that additional explanations are a good way to answer question to the extent that the poster knows how to proceed.|||I just gave an explanation on how HAVING works. That's all...

A very handy explanation... I knew how HAVING works, and I'm sure if someone had asked me I could have shown how it's non-primitive, but I never actually broke it down like that.

I have no idea why you are reacting so aggressively.

Even if you could figure out why people say what they do, who cares? It's just words.

eliminating redundant data

edit: this came out longer than I thought; any comments about anything
here are greatly appreciated. Thank you for reading.

My system stores millions of records, each with fields like firstname,
lastname, email address, city, state, zip, along with any number of user
defined fields. The application allows users to define message templates
with variables. They can then select a template, and for each variable
in the template, type in a value or select a field.

The system allows you to query for messages you've sent by specifying
criteria for the variables (not the fields).

This requirement has made it difficult to normalize my datamodel at all
for speed. What I have is this:

[fieldindex]
id int PK
name nvarchar
type datatype

[recordindex]
id int PK
...

[recordvalues]
recordid int PK
fieldid int PK
value nvarchar

whenever messages are sent, I store which fields were mapped to what
variables for that deployment. So the query with a variable criteria
looks like this:

select coalesce(vm.value, rv.value)
from sentmessages sm
inner join variablemapping vm on vm.deploymentid=sm.deploymentid
left outer join recordvalues rv on
rv.recordid=sm.recordid and rv.fieldid=vm.fieldid
where coalesce(vm.value, rv.value) ...

this model works pretty well for searching messages with variable
criteria and looking up variable values for a particular message. the
big problem I have is that the recordvalues table is HUGE, 1 million
records with 50 fields each = 50 million recordvalues rows. The value,
two int columns plus the two indexes I have on the table make it into a
beast. Importing data takes forever. Querying the records (with a field
criteria) also takes longer than it should.

makes sense, the performance was largely IO bound.

I decided to try and cut into that IO. looking at a recordvalues table
with over 100 million rows in it, there were only about 3 million unique
values. so I split the recordvalues table into two tables:

[recordvalues]
recordid int PK
fieldid int PK
valueid int

[valueindex]
id int PK
value nvarchar (unique)

now, valueindex holds 3 million unique values and recordvalues
references them by id. To my surprise this shaved only 500mb off a 4gb
database!

importing didn't get any faster either; although it's no longer IO bound,
it appears the CPU as the new bottleneck outweighs the old IO bottleneck.
this is probably because I haven't optimized the queries for the new
tables (I was hoping it wouldn't be so hard w/o the IO problem).

is there a better way to accomplish what I'm trying to do? (eliminate
the redundant data).. does SQL have built-in constructs to do stuff like
this? It seems like maybe I'm trying to duplicate functionality at a
high level that may already exist at a lower level.

IO is becoming a serious bottleneck.
the million record 50 field csv file is only 500mb. I would've thought
that after eliminating all the redundant first name, city, last name,
etc it would be less data and not 8x more!

-
Gordon

Posted Via Usenet.com Premium Usenet Newsgroup Services
http://www.usenet.com
No database vendor, version, platform... Why are you using tables as indexes
when you can use indexes as indexes? Normalizing a data model generally
slows it down. Denormalizing generally speeds things up.

"gordy" <gordy@.dynamicsdirect.com> wrote in message
news:40be960f$1_1@.Usenet.com...
> edit: this came out longer than I thought, any comments about anything
> here is greatly appreciated. thank you for reading
> My system stores millions of records, each with fields like firstname,
> lastname, email address, city, state, zip, along with any number of user
> defined fields. The application allows users to define message templates
> with variables. They can then select a template, and for each variable
> in the template, type in a value or select a field.
> The system allows you to query for messages you've sent by specifying
> criteria for the variables (not the fields).
> This requirement has made it difficult to normalize my datamodel at all
> for speed. What I have is this:
> [fieldindex]
> id int PK
> name nvarchar
> type datatype
> [recordindex]
> id int PK
> ...
> [recordvalues]
> recordid int PK
> fieldid int PK
> value nvarchar
> whenever messages are sent, I store which fields were mapped to what
> variables for that deployment. So the query with a variable criteria
> looks like this:
> select coalesce(vm.value, rv.value)
> from sentmessages sm
> inner join variablemapping vm on vm.deploymentid=sm.deploymentid
> left outer join recordvalues rv on
> rv.recordid=sm.recordid and rv.fieldid=vm.fieldid
> where coalesce(vm.value, rv.value) ...
> this model works pretty well for searching messages with variable
> criteria and looking up variable values for a particular message. the
> big problem I have is that the recordvalues table is HUGE, 1 million
> records with 50 fields each = 50 million recordvalues rows. The value,
> two int columns plus the two indexes I have on the table make it into a
> beast. Importing data takes forever. Querying the records (with a field
> criteria) also takes longer than it should.
> makes sense, the performance was largely IO bound.
> I decided to try and cut into that IO. looking at a recordvalues table
> with over 100 million rows in it, there were only about 3 million unique
> values. so I split the recordvalues table into two tables:
> [recordvalues]
> recordid int PK
> fieldid int PK
> valueid int
> [valueindex]
> id int PK
> value nvarchar (unique)
> now, valueindex holds 3 million unique values and recordvalues
> references them by id. to my suprise this shaved only 500mb off a 4gb
> database!
> importing didn't get any faster either, although it's no longer IO bound
> it appears the cpu as the new bottleneck outweighed the IO bottleneck.
> this is probably because I haven't optimized the queries for the new
> tables (was hoping it wouldn't be so hard w/o the IO problem).
> is there a better way to accomplish what I'm trying to do? (eliminate
> the redundant data).. does SQL have built-in constructs to do stuff like
> this? It seems like maybe I'm trying to duplicate functionality at a
> high level that may already exist at a lower level.
> IO is becoming a serious bottleneck.
> the million record 50 field csv file is only 500mb. I would've thought
> that after eliminating all the redundant first name, city, last name,
> etc it would be less data and not 8x more!
> -
> Gordon
> Posted Via Usenet.com Premium Usenet Newsgroup Services
> http://www.usenet.com|||> No database vendor, version, platform... Why are you using tables as indexes
> when you can use indexes as indexes? Normalizing a data model generally
> slows it down. Denormalizing generally speeds things up.

sorry, I'm using MS SQL2000

How can I use indexes as indexes? I mean, in the example I posted, can
you give an example?

Posted Via Usenet.com Premium Usenet Newsgroup Services
http://www.usenet.com|||Well, I don't know SQL Server, but in Oracle, you create an index using the
CREATE INDEX statement. I suspect it works the same or similar in SQL
Server.

Here's an Oracle example that creates an index called
"asearch_client_id_idx" on the client_id field in a table called
"alphasearch" owned by user "alphasearch":

CREATE INDEX ALPHASEARCH.ASEARCH_CLIENT_ID_IDX
ON ALPHASEARCH.ALPHASEARCH(CLIENT_ID);

Now, I am about to lie a little bit for simplicity's sake, but here goes...

When a query is executed that uses client_id as a search or sort criteria,
the Oracle optimizer will decide whether or not to use the index. If it
does, it looks up the values needed in the index, and retrieves their row
ids, which in turn are essentially pointers to the location of the data in
data blocks on disc, so it goes directly to that location on disc and
retrieves the data out of the blocks. It does not need to go logically into
the table. Note that what I refer to as row_id in Oracle may not be the same
concept in SQL Server.

Hope you get the general idea, and you should consult your documentation
about indexes.
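The T-SQL equivalent for the tables in the original post would be plain CREATE INDEX statements; a sketch (index names are arbitrary, and the columns follow the revised recordvalues/valueindex schema):

    -- Find all records holding a given value for a given field
    CREATE INDEX IX_recordvalues_field_value
        ON recordvalues (fieldid, valueid)

    -- Find the id of a value string quickly (the UNIQUE constraint may already provide this)
    CREATE UNIQUE INDEX IX_valueindex_value
        ON valueindex (value)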

"gordy" <gordy@.dynamicsdirect.com> wrote in message
news:40bf5ec1$1_1@.Usenet.com...
> > No database vendor, version, platform... Why are you using tables as
indexes
> > when you can use indexes as indexes? Normalizing a data model generally
> > slows it down. Denormalizing generally speeds things up.
> sorry, I'm using MS SQL2000
> How can I use indexes as indexes? I mean, in the example I posted, can
> you give an example?
|||> Well, I don't know SQL Server, but in Oracle, you create an index using the
> CREATE INDEX statement. I suspect it works the same or similar in SQL
> Server.
> Here's an Oracle example that creates an index called
> "asearch_client_id_idx" on the client_id field in a table called
> "alphasearch" owned by user "alphasearch":
> CREATE INDEX ALPHASEARCH.ASEARCH_CLIENT_ID_IDX
> ON ALPHASEARCH.ALPHASEARCH(CLIENT_ID);

wow, what a concept ;)

I appreciate the criticism.. after all that's the intent of my original
post, however, I would prefer it to be of the constructive variety.

In my system, there are 'fields' and there are 'variables'. the user
creates the relationships between them whenever they send a message. in
order to search for messages by 'variable' values, sql needs a
relationship of its own to translate between them.

this has kept me from being able to use the obvious:
[records]
id,field1,field2,field3,...

because in a query for 'variable1', depending on the message it may have
to look at 'field3' or 'field4' for the value. this requirement is why I
have the tables I have now (recordindex, fieldindex and recordvalues).

I realize this makes for very large indexes.. and like you said, the
table itself is nothing more than a big index. This is the problem I'd
like to solve. In my original post I explained how I attempted to
eliminate redundant data, but I only eliminated 500mb (of 4gb) because
the majority of volume in this db isn't the data itself, but the index size.
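
One thing that may help with this layout is a composite index on recordvalues keyed by valueid, so a value lookup can be answered from the index alone. This is only a sketch; the table and column names come from the earlier post, and it assumes searches start from a value and fan out to records:

CREATE INDEX IX_recordvalues_value_field_record
ON dbo.recordvalues (valueid, fieldid, recordid)
GO

-- typical lookup: which records carry a given value (the literal is a placeholder)
SELECT rv.recordid, rv.fieldid
FROM dbo.valueindex AS vi
JOIN dbo.recordvalues AS rv ON rv.valueid = vi.id
WHERE vi.value = N'some value'

Keep in mind that in SQL Server 2000 every nonclustered index also carries the clustered key, which is one reason index volume can dwarf the raw data in a design like this.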

|||"gordy" <gordy@.dynamicsdirect.com> wrote in message
news:40bf9c49$1_1@.Usenet.com...
> > Well, I don't know SQL Server, but in Oracle, you create an index using
the
> > CREATE INDEX statement. I suspect it works the same or similar in SQL
> > Server.
> > Here's an Oracle example that creates an index called
> > "asearch_client_id_idx" on the client_id field in a table called
> > "alphasearch" owned by user "alphasearch":
> > CREATE INDEX ALPHASEARCH.ASEARCH_CLIENT_ID_IDX
> > ON ALPHASEARCH.ALPHASEARCH(CLIENT_ID);
> wow, what a concept ;)
> I appreciate the criticism.. after all that's the intent of my original
> post, however, I would prefer it to be of the constructive variety.

It wasn't criticism. I thought you really didn't know about RDBMS indexes,
being that you essentially created your own.

> In my system, there are 'fields' and there are 'variables'. the user
> creates the relationships between them whenever they send a message. in
> order to search for messages by 'variable' values, sql needs a
> relationship of its own to translate between them.
> this has kept me from being able to use the obvious:
> [records]
> id,field1,field2,field3,...
> because in a query for 'variable1', depending on the message it may have
> to look at 'field3' or 'field4' for the value. this requirement is why I
> have the tables I have now (recordindex, fieldindex and recordvalues).
> I realize this makes for very large indexes.. and like you said, the
> table itself is nothing more than a big index. This is the problem I'd
> like to solve. In my original post I explained how I attempted to
> eliminate redundant data, but I only eliminated 500mb (of 4gb) because
> the majority of volume in this db isn't the data itself, but the index
size.

The only similar situation I've seen like this (home-brew index constructs)
is with a document imaging system called FileNET. In that case, the vendor
actually created its own mini RDBMS to handle just these index/table
constructs. It was very fast, but, of course, proprietary.

It's hard to tell exactly what you are trying to do, though. Could you get
into the business requirements a bit? It would help me to understand what
you need to do. I get the feeling from the solution you came up with that
you are a programmer, not a DBA.


Eliminating Dynamic SQL

I am refactoring stored procedures that use dynamic sql.
The reason the stored procedures use dynamic sql is because
the data that is needed comes from another MS SQL database
that resides on the same server instance.
The following is a code example from a stored procedure:
declare @.theDatabase sysname;
set @.theDatabase = 'SomeDatabase';
declare @.thePrimaryKey int;
set @.thePrimaryKey = 2;
declare @.theSqlString varchar(8000);
set @.theSqlString = 'select r.Feild from ' +
@.theDatabase +
'..TableName r ' +
'where r.ID = '
+ rtrim( str( @.thePrimaryKey ) )
exec sp_executesql @.theSqlString;
The problem is that dynamic sql is EXTREMELY slow! I have
been doing some research and have found a little bit on
sp_linkedservers and OPENQUERY but I have not figured out
how to accomplish what is done above.
Can anyone give me an example of how I can use
sp_linkedservers and OPENQUERY to accomplish this query?
Would it be better, performance wise, to just continue
using dynamic sql?
Any help would be greatly appreciated.
Sincerely,
John Dickey
jpd0861@.msn.comJohn Dickey wrote:
> I am refactoring stored procedures that use dynamic sql.
> The reason the store procedures use dynamic sql is because
> the data that is need comes from another MS SQL database
> that resides on the same server instance.
> The following is a code example from a stored procedure:
> declare @.theDatabase sysname;
> set @.theDatabase = 'SomeDatabase';
> declare @.thePrimaryKey int;
> set @.thePrimaryKey = 2;
> declare @.theSqlString varchar(8000);
> set @.theSqlString = 'select r.Feild from ' +
> @.theDatabase +
> '..TableName r ' +
> 'where r.ID = '
> + rtrim( str( @.thePrimaryKey ) )
> exec sp_executesql @.theSqlString;
> The problem is that dynamic sql is EXTREMELY slow! I have
> been doing some research and have found a little bit on
> sp_linkedservers and OPENQUERY but I have not figured out
> how to accomplish what is done above.
> Can anyone give me an example of how I can use
> sp_linkedservers and OPENQUERY to accomplish this query?
> Would it be better, performance wise, to just continue
> using dynamic sql?
If you code the dynamic SQL properly, you can eliminate much of the
performance overhead. In your case, you're using sp_executesql, but are
not taking advantage of its main benefit over EXEC. That is, parameter
binding.
Instead of how it's currently coded, you could use:
set @.theSqlString = 'select r.Feild from ' +
'[' + @.theDatabase + ']' +
'..TableName r ' +
'where r.ID = @.thePrimaryKey'
Exec sp_executesql @.theSqlString, N'@.thePrimaryKey INT', @.thePrimaryKey
(Note that sp_executesql expects an nvarchar statement, so @.theSqlString should be declared as nvarchar(4000) rather than varchar(8000).)
I'm not sure why it's necessary, though, given your example. You have a
hard-coded database name on the same server. If you know the database,
you can just fully-qualify the object in the SQL:
Select * from [AnotherDatabase].dbo.[MyTable]
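To spell that out as a self-contained sketch (the database, table and column names are the ones from the original post; the statement variable is declared nvarchar, since sp_executesql does not accept varchar):

DECLARE @theDatabase sysname
SET @theDatabase = N'SomeDatabase'

DECLARE @thePrimaryKey int
SET @thePrimaryKey = 2

DECLARE @theSqlString nvarchar(4000)
SET @theSqlString = N'select r.Feild from ' +
    QUOTENAME(@theDatabase) +
    N'..TableName r where r.ID = @pk'

EXEC sp_executesql @theSqlString, N'@pk int', @pk = @thePrimaryKey

Because the statement text no longer changes from call to call, the plan can be reused; only the parameter value varies.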
David Gugick
Imceda Software
www.imceda.com|||Why do you want to parameterize the name of the target database?
If the name may change and/or you prefer not to hard-code it in SPs
then just create a view or views to reference the other database and
reference only the views in your SPs. That way the database name is
coded only in a few places instead of every SP.
David Portas
SQL Server MVP
--
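
To make the view suggestion concrete, a minimal sketch (database and object names as in the example above; the column list is an assumption):

CREATE VIEW dbo.TableName_v
AS
SELECT r.ID, r.Feild
FROM SomeDatabase..TableName AS r
GO

The stored procedures then simply do

SELECT Feild FROM dbo.TableName_v WHERE ID = 2

and if the source database is ever renamed or moved, only the view definition has to change.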

Eliminating Dynamic SQL

I am refactoring stored procedures that use dynamic sql.
The reason the stored procedures use dynamic sql is because
the data that is needed comes from another MS SQL database
that resides on the same server instance.
The following is a code example from a stored procedure:
declare @.theDatabase sysname;
set @.theDatabase = 'A database that I get based on
critera at runtime.';
declare @.thePrimaryKey int;
set @.thePrimaryKey = 2;
declare @.theSqlString varchar(8000);
set @.theSqlString = 'select r.Feild from ' +
@.theDatabase +
'..TableName r ' +
'where r.ID = '
+ rtrim( str( @.thePrimaryKey ) )
exec sp_executesql @.theSqlString;
The problem is that dynamic sql is EXTREMELY slow! I have
been doing some research and have found a little bit on
sp_linkedservers and OPENQUERY but I have not figured out
how to accomplish what is done above.
Can anyone give me an example of how I can use
sp_linkedservers and OPENQUERY to accomplish this query?
Would it be better, performance wise, to just continue
using dynamic sql?
Any help would be greatly appreciated.
Sincerely,
John Dickey
jpd0861@.msn.comIf you have the same table structures in each database then you can
create a partitioned view across them and reference the partitioned
view instead. Whether this is the right solution though may depend on
just why you have the data split across multiple DBs in the first
place. It doesn't seem like a very practical architecture if it forces
you to write dynamic SQL in all your production code.
David Portas
SQL Server MVP
--|||Thank you for your reply David.
The reason that we have different databases is because the
databases are for different applications that my
application interfaces with. They happen to be Great Plains
accounting databases that I am trying to retrieve data from.
Can you give me more information about the partitioned view
and how I can set that up?
John Dickey

>--Original Message--
>If you have the same table structures in each database
then you can
>create a partitioned view across them and reference the
partitioned
>view instead. Whether this is the right solution though
may depend on
>just why you have the data split across multiple DBs in
the first
>place. It doesn't seem like a very practical architecture
if it forces
>you to write dynamic SQL in all your production code.
>--
>David Portas
>SQL Server MVP
>--
>.
>|||It's all in Books Online:
http://msdn.microsoft.com/library/e...des_06_17zr.asp
David Portas
SQL Server MVP
--
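
For illustration only, a local partitioned view across two member databases might look like the sketch below. The database names (GP_Company1, GP_Company2), the table and its columns are made up for the example; in a real partitioned view each member table also needs a CHECK constraint on the partitioning column (here CompanyID) so the optimizer can eliminate the members that cannot match:

CREATE VIEW dbo.AllCompanies_SomeTable
AS
SELECT CompanyID, ID, SomeColumn FROM GP_Company1.dbo.SomeTable
UNION ALL
SELECT CompanyID, ID, SomeColumn FROM GP_Company2.dbo.SomeTable
GO

Queries and stored procedures then reference dbo.AllCompanies_SomeTable and filter on CompanyID, with no dynamic SQL involved.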

Eliminating Duplicates

I have a table with 580 duplicate rows out of a total of 1101 rows. The
duplicates are in 3 columns that were supposed to be the primary key columns
(the developers forgot to create the primary key in development; production is
OK).
I found the duplicates with a query but I don't know how to delete them
from the table.
Can anyone help ? Thanks.
Hi,
See this:-
http://www.sqlteam.com/item.asp?ItemID=3331
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=6256
http://support.microsoft.com/default.aspx?scid=kb;en-us;139444
Thanks
Hari
SQL Server MVP
"DXC" <DXC@.discussions.microsoft.com> wrote in message
news:DAF8776C-7644-4B3A-83BC-4C6B02FF2607@.microsoft.com...
>I have a table with 580 rows of duplicate on total of 1101 rows. The
> duplicates are in 3 columns that were supposed to be primary key columns
> (Developers forgot to create the primary key in development-production is
> OK).
> I figured the duplicates with a query but I don't know how to delete them
> from the table.
> Can anyone help ? Thanks.
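
One common SQL Server 2000 pattern (roughly what those links walk through) is to copy one survivor per key into a holding table and reload. A sketch with hypothetical names (keyCol1..keyCol3 stand in for the three would-be key columns, otherCol for the remaining data):

SELECT keyCol1, keyCol2, keyCol3,
       MAX(otherCol) AS otherCol      -- keep one survivor per key
INTO   #keep
FROM   dbo.MyTable
GROUP  BY keyCol1, keyCol2, keyCol3

BEGIN TRAN
    DELETE dbo.MyTable
    INSERT dbo.MyTable (keyCol1, keyCol2, keyCol3, otherCol)
    SELECT keyCol1, keyCol2, keyCol3, otherCol FROM #keep
COMMIT TRAN

ALTER TABLE dbo.MyTable
    ADD CONSTRAINT PK_MyTable PRIMARY KEY (keyCol1, keyCol2, keyCol3)

With more than one non-key column this picks each value independently; if the rows are exact duplicates, copying with SELECT DISTINCT works just as well. Adding the primary key at the end is the important part, since it stops the duplicates from coming back.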

Eliminating Duplicates

Have a pretty simple question but the answer seems to be evading me:

Here's the DDL for the tables in question:

CREATE TABLE [dbo].[Office] (
[OfficeID] [int] IDENTITY (1, 1) NOT NULL ,
[ParentOfficeID] [int] NOT NULL ,
[WebSiteID] [int] NOT NULL ,
[IsDisplayOnWeb] [bit] NOT NULL ,
[IsDisplayOnAdmin] [bit] NOT NULL ,
[OfficeStatus] [char] (1) NOT NULL ,
[DisplayORD] [smallint] NOT NULL ,
[OfficeTYPE] [varchar] (10) NOT NULL ,
[OfficeNM] [varchar] (50) NOT NULL ,
[OfficeDisplayNM] [varchar] (50) NOT NULL ,
[OfficeADDR1] [varchar] (50) NOT NULL ,
[OfficeADDR2] [varchar] (50) NOT NULL ,
[OfficeCityNM] [varchar] (50) NOT NULL ,
[OfficeStateCD] [char] (2) NOT NULL ,
[OfficePostalCD] [varchar] (15) NOT NULL ,
[OfficeIMG] [varchar] (100) NOT NULL ,
[OfficeIMGPath] [varchar] (100) NOT NULL ,
[RegionID] [int] NOT NULL ,
[OfficeTourURL] [varchar] (255) NULL ,
[GeoAreaID] [int] NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL ,
[OfficeBrandedURL] [varchar] (255) NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[OfficeManagement] (
[OfficeID] [int] NOT NULL ,
[PersonnelID] [int] NOT NULL ,
[JobTitleID] [int] NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[SeqNBR] [int] NOT NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[OfficeMls] (
[OfficeID] [int] NOT NULL ,
[SourceID] [int] NOT NULL ,
[OfficeMlsNBR] [varchar] (20) NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[Personnel] (
[PersonnelID] [int] IDENTITY (1, 1) NOT NULL ,
[PersonnelDisplayName] [varchar] (100) NOT NULL ,
[FirstNM] [varchar] (50) NOT NULL ,
[PreferredFirstNM] [varchar] (50) NOT NULL ,
[MiddleNM] [varchar] (50) NOT NULL ,
[LastNM] [varchar] (50) NOT NULL ,
[PersonalTaxID] [varchar] (9) NOT NULL ,
[HireDT] [datetime] NOT NULL ,
[TermDT] [datetime] NOT NULL ,
[HomePhoneNBR] [varchar] (15) NULL ,
[HomeADDR1] [varchar] (50) NOT NULL ,
[HomeADDR2] [varchar] (50) NOT NULL ,
[HomeCityNM] [varchar] (50) NOT NULL ,
[HomeStateCD] [char] (2) NOT NULL ,
[HomePostalCD] [varchar] (15) NOT NULL ,
[PersonnelLangCSV] [varchar] (500) NOT NULL ,
[PersonnelSlogan] [varchar] (500) NOT NULL ,
[BGColor] [varchar] (50) NOT NULL ,
[IsEAgent] [bit] NOT NULL ,
[IsArchAgent] [bit] NOT NULL ,
[IsOptOut] [bit] NOT NULL ,
[IsDispOnlyPrefFirstNM] [bit] NOT NULL ,
[IsHideMyListingLink] [bit] NOT NULL ,
[IsPreviewsSpecialist] [bit] NOT NULL ,
[AudioFileNM] [varchar] (100) NULL ,
[iProviderID] [int] NOT NULL ,
[DRENumber] [varchar] (10) NOT NULL ,
[AgentBrandedURL] [varchar] (255) NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL ,
[IsDisplayAwards] [bit] NOT NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[PersonnelMLS] (
[PersonnelID] [int] NOT NULL ,
[SourceID] [int] NOT NULL ,
[AgentMlsNBR] [varchar] (20) NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[Office] ADD
CONSTRAINT [FK_Office_OfficeProfile] FOREIGN KEY
(
[OfficeID]
) REFERENCES [dbo].[OfficeProfile] (
[OfficeID]
) NOT FOR REPLICATION
GO

alter table [dbo].[Office] nocheck constraint [FK_Office_OfficeProfile]
GO

ALTER TABLE [dbo].[OfficeManagement] ADD
CONSTRAINT [FK_OfficeManagement_LookupJobTitle] FOREIGN KEY
(
[JobTitleID]
) REFERENCES [dbo].[LookupJobTitle] (
[JobTitleID]
),
CONSTRAINT [FK_OfficeManagement_Office] FOREIGN KEY
(
[OfficeID]
) REFERENCES [dbo].[Office] (
[OfficeID]
) NOT FOR REPLICATION ,
CONSTRAINT [FK_OfficeManagement_Personnel] FOREIGN KEY
(
[PersonnelID]
) REFERENCES [dbo].[Personnel] (
[PersonnelID]
) ON DELETE CASCADE
GO

alter table [dbo].[OfficeManagement] nocheck constraint [FK_OfficeManagement_Office]
GO

ALTER TABLE [dbo].[OfficeMls] ADD
CONSTRAINT [FK_OfficeMls_Office] FOREIGN KEY
(
[OfficeID]
) REFERENCES [dbo].[Office] (
[OfficeID]
) NOT FOR REPLICATION
GO

alter table [dbo].[OfficeMls] nocheck constraint [FK_OfficeMls_Office]
GO

ALTER TABLE [dbo].[PersonnelMLS] ADD
CONSTRAINT [FK_PersonnelMLS_Personnel] FOREIGN KEY
(
[PersonnelID]
) REFERENCES [dbo].[Personnel] (
[PersonnelID]
) NOT FOR REPLICATION
GO

alter table [dbo].[PersonnelMLS] nocheck constraint [FK_PersonnelMLS_Personnel]
GO

Here's the query I'm having trouble with:

SELECT distinct Personnel.PersonnelID,
Personnel.FirstNM,
Personnel.LastNM,
Office.OfficeNM,
Office.OfficeID,
OfficeMls.SourceID AS OfficeBoard,
PersonnelMLS.SourceID AS AgentBoard
FROM Personnel INNER JOIN
OfficeManagement ON
Personnel.PersonnelID = OfficeManagement.PersonnelID
INNER JOIN
Office ON OfficeManagement.OfficeID = Office.OfficeID
INNER JOIN
OfficeMls ON Office.OfficeID = OfficeMls.OfficeID
INNER JOIN
PersonnelMLS ON Personnel.PersonnelID = PersonnelMLS.PersonnelID
where officemls.sourceid <> personnelmls.sourceid
and office.officenm not like ('%admin%')
group by PersonnelMLS.SourceID,
Personnel.PersonnelID,
Personnel.FirstNM,
Personnel.LastNM,
Office.OfficeNM,
Office.OfficeID,
OfficeMls.SourceID
order by office.officenm

What I'm trying to retrieve are those agents who have source id's that are not in the Office's domain of valid source id's. Here's a small portion of the results:

PersonnelID FirstNM LastNM OfficeNM OfficeID OfficeBoard AgentBoard
---- ---------------- ---------------- ---------------- ---- ---- ----
18205 Margaret Peggy Quattro Aventura North 650 906 908
18205 Margaret Peggy Quattro Aventura North 650 918 908
15503 Susan Jordan Blackburn Point 889 920 909
15503 Susan Jordan Blackburn Point 889 921 909
15503 Susan Jordan Blackburn Point 889 921 920
15279 Sandra Humphrey Boca Beach North 890 917 906
15279 Sandra Humphrey Boca Beach North 890 906 917
15279 Sandra Humphrey Boca Beaches 626 917 906
15279 Sandra Humphrey Boca Beaches 626 906 917
13532 Michael Demcho Boca Downtown 735 906 917
14133 Maria Ford Boca Downtown 735 906 917
19126 Michael Silverman Boca Glades Road 736 917 906
18920 Beth Schwartz Boca Glades Road 736 906 917

If you take a look at Sandra Humphries, you'll see she's out of office 626. Office 626 is associated with source id's 907 and 916. Sandra Humphries is also associated with those two source id's , but she shows up in the results.

I know this was AWFULLY long winded, but just wanted to make sure I made myself as clear as possible.

Any help would be greatly appreciated.

Thanks in advance!Could you edit the post and wrap [ code] [ /code ] tags around it?

Just remove the spaces in the tags|||You're right...that is a LOT better!! Thanks for the tip...|||Do the same with the data...

Why are you doing a GROUP BY? To get DISTINCT data?

EDIT: And can you post the DDL for the tables|||...and I guess I really don't need the GROUP BY...|||As far as i can see .. the rows are not exactly duplicates ... OfficeBoard id seems to be differing ... a joi n problem .. maybe ??|||didnt read the post completely .. my bad !!!

Didnt get what exactly you were trying to accomplish ...

How do you decide that Sandra Humphries is outta office 626 ?|||Office 626 is the office she's associated with - it's set up in the main table. Been working on the join and can't seem to work out the bugs on this one...|||you could try replacing
"where officemls.sourceid <> personnelmls.sourceid"
with
"where personnelmls.sourceid not in
(select sourceid from officemls omls join office o on omls.officeid = o.officeid
where officemanagement.officeid = omls.officeid)"
This will give you duplicates because of the "distinct" but only the wrong personnelmls.sourceid should be selected.
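
For what it's worth, the same intent ("list agents whose board is not one of their office's boards") can also be written with NOT EXISTS, which avoids the cross-join effect of comparing the two SourceID columns directly. A sketch against the tables from the DDL above:

SELECT DISTINCT p.PersonnelID, p.FirstNM, p.LastNM,
       o.OfficeNM, o.OfficeID, pm.SourceID AS AgentBoard
FROM Personnel AS p
INNER JOIN OfficeManagement AS om ON om.PersonnelID = p.PersonnelID
INNER JOIN Office AS o ON o.OfficeID = om.OfficeID
INNER JOIN PersonnelMLS AS pm ON pm.PersonnelID = p.PersonnelID
WHERE o.OfficeNM NOT LIKE '%admin%'
  AND NOT EXISTS (SELECT * FROM OfficeMls AS omls
                  WHERE omls.OfficeID = o.OfficeID
                    AND omls.SourceID = pm.SourceID)
ORDER BY o.OfficeNM

This drops the OfficeBoard column from the output (there can be several per office), which is usually what you want when the question is simply "which agent boards are missing from the office's set".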

Eliminating Duplicates

I have a table with a field called PID. This field can have duplicate
values in it. I have 2 other fields called CAU and SUF. I may have 3 rows
with the same PID and same CAU, but the SUF field when concatenated with the
2 aforementioned fields make the row unique.
I want to know how to select 1 of the 3 rows but not the other 2. In
other words I need 1 row for each SUF. Hope this makes sense and I would
appreciate any help.Try the following query. Work from the inner select out.
CREATE TABLE OneSUF
(
PID INT NOT NULL,
CAU INT NOT NULL,
SUF INT NOT NULL,
Description VARCHAR(255) NOT NULL
)
INSERT INTO OneSUF VALUES (1, 2, 3, '1, 2, 3')
INSERT INTO OneSUF VALUES (1, 2, 4, '1, 2, 4')
INSERT INTO OneSUF VALUES (1, 2, 5, '1, 2, 5')
INSERT INTO OneSUF VALUES (1, 3, 1, '1, 3, 1')
INSERT INTO OneSUF VALUES (1, 3, 2, '1, 3, 2')
INSERT INTO OneSUF VALUES (1, 3, 3, '1, 3, 3')
INSERT INTO OneSUF VALUES (1, 4, 3, '1, 4, 3')
INSERT INTO OneSUF VALUES (1, 4, 4, '1, 4, 4')
INSERT INTO OneSUF VALUES (1, 4, 5, '1, 4, 5')
INSERT INTO OneSUF VALUES (1, 4, 6, '1, 4, 6')
SELECT SUF3.SUF, SUF3.CAU, SUF3.PID, SUF3.Description
FROM OneSUF AS SUF3 INNER JOIN
(
SELECT SUF1.SUF, SUF1.CAU, MIN(SUF1.PID) AS PID FROM OneSUF AS SUF1 INNER
JOIN
(SELECT SUF, MIN(CAU) AS CAU FROM OneSUF GROUP BY SUF) AS SUF2
ON SUF1.SUF = SUF2.SUF AND SUF1.CAU = SUF2.CAU
GROUP BY SUF1.SUF, SUF1.CAU) AS SUF4
ON SUF3.SUF = SUF4.SUF AND SUF3.CAU = SUF4.CAU AND SUF3.PID = SUF4.PID
ORDER BY 1,2,3
--
Barry McAuslin
Look inside your SQL Server files with SQL File Explorer.
Go to http://www.sqlfe.com for more information.
"Jeff Humphrey" <jeffhumphrey@.cox-internet.com> wrote in message
news:%23OzmAYOmDHA.2416@.TK2MSFTNGP10.phx.gbl...
> I have a table with a field called PID. This field can have duplicate
> values in it. I have 2 other fields called CAU and SUF. I may have 3
rows
> with the same PID and same CAU, but the SUF field when concatenated with
the
> 2 aforementioned fields make the row unique.
> I want to know how to select 1 of the 3 rows but not the other 2. In
> otherwords I need 1 row for each SUF. Hopes this makes sense and I would
> appreciate any help.
>
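
As an aside, the same result can be had with correlated subqueries against the OneSUF sample table above, assuming (as in the reply) that "one row per SUF" means the lowest CAU and, within that, the lowest PID:

SELECT o.SUF, o.CAU, o.PID, o.Description
FROM OneSUF AS o
WHERE o.CAU = (SELECT MIN(c.CAU) FROM OneSUF AS c WHERE c.SUF = o.SUF)
  AND o.PID = (SELECT MIN(p.PID) FROM OneSUF AS p
               WHERE p.SUF = o.SUF AND p.CAU = o.CAU)
ORDER BY o.SUF, o.CAU, o.PID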

Eliminating Duplicated

I have 3 tables Employee (empno,name,city) , Emp-Pl (empno,noofleaves) , and
Emp-Cl (empno,noofleaves). Now i require the report in the following format
Sample Data
employee
1 ravi kumar rajahmundry
2 ravindra kakinada
emp-pl emp-cl
empno noofleave empno noofleave
1 2 1 4
1 3 1 2
2 4 1 3
2 1 2 1
2 1
EMPNO NAME CL PL
1 ravikumar 4 2
2 3
3
2 ravindra 1 4
1
1
"venkataramarao" <venkataramarao@.discussions.microsoft.com> wrote in message
news:AC5F930E-5EE5-41B2-AB7B-C809082EF5C7@.microsoft.com...
>I have 3 tables Employee (empno,name,city) , Emp-Pl (empno,noofleaves) ,
>and
> Emp-Cl (empno,noofleaves). Now i require the report in the following
> format
> Sample Data
> employee
> 1 ravi kumar rajahmundry
> 2 ravindra kakinada
> emp-pl emp-cl
> empno noofleave empno noofleave
> 1 2 1 4
> 1 3 1 2
> 2 4 1 3
> 2 1 2 1
> 2 1
> EMPNO NAME CL PL
> 1 ravikumar 4 2
> 2 3
> 3
> 2 ravindra 1 4
> 1
> 1
I don't understand your requirements. Why does data from the CL table appear
in both the CL and PL columns? What determines this output?
Please post DDL and post sample data as INSERT statements rather than
sketches of tables.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx

eliminating duplicate records from a table

Hi All

I am having a problem selecting data from Oracle. There are multiple entries per customer, and I want to select the latest updated value. I know the customer ids which have multiple entries corresponding to them.

The columns are Cust_ID, Update_Date, Cust_Name,Cust_address.

Select ADDRESS_LINE1, LAST_UPDATE_DATE,ADDRESS_LINE2, ADDRESS_LINE3, CITY, STATE, ZIP,COUNTRY

from Customer_Table s where s.cust_id in (101,102,103,104,105,106)

Group By ADDRESS_LINE1, LAST_UPDATE_DATE,ADDRESS_LINE2, ADDRESS_LINE3, CITY, STATE, ZIP,COUNTRY

having .........(no idea)

I know this is not correct as I am still getting multiple records.

Any help will be useful.

Well, we don't really do oracle around these parts, but...

You should be able to say

count(*) > 1

in the having clause to get groups where there are > 1 rows in there.
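
Against the Customer_Table columns shown above, that would look something like this (a sketch; it only lists which cust_ids have more than one row, it does not yet pick the latest one):

SELECT s.cust_id, COUNT(*) AS row_count
FROM Customer_Table s
WHERE s.cust_id IN (101,102,103,104,105,106)
GROUP BY s.cust_id
HAVING COUNT(*) > 1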

|||

I guess that "the latest" means the one with the latest LAST_UPDATE. I do not know if this works for "oracle", but it does for SQL Server.

select *
from Customer_Table s
where s.cust_id in (101,102,103,104,105,106)
  and last_update = (
        select max(a.last_update)
        from Customer_Table as a
        where a.cust_id = s.cust_id
      )
go

AMB

|||

Thanks for replying.. I am sorry for posting an Oracle query here..

I found the solution.....Thanks a lot...

Eliminating Duplicate Records

Hi

How can i eliminate duplicate records from a single table, when i use a query that links multiple tables together?

There is a surrogate key TABLE1 and i just can't seem to wrap my head around the layout.

TABLE1

ID | memQuestionaireID | QuestscoreID | DateOfAssessment | etc.

TABLE2
memQuestionaireID | MemberID | Answer1 | Answer2 | etc.

TABLE3

MemberID | Name | Surname | IDNo | etc.

t1.memQuestionaireID = t2.memQuestionaireID

t2.MemberID = t3.MemberID

That's how the tables link, obvious enough. How can i delete duplicate records from TABLE1 where MemberID occurs more than once and the t1.DateOfAssessment = MAX(DateOfAssessment)

The ID is unique, but there are duplicate memberID's with different date of assessments, i want to take the last used date of assessment and keep only that one, and remove the rest from the table?

I'm sure the query is simple, i'm just too stupid to see how to solve it!!

Any help would be greatly appreciated.

I'm not very clued up on how to remove records or update records, Please help.

Kind Regards
Carel Greaves

See if this does what you need...

Code Snippet

DELETE t1
FROM Table1 t1
INNER JOIN Table2 t2
    ON t1.memQuestionaireID = t2.memQuestionaireID
INNER JOIN
(
    SELECT t2b.MemberID, MAX(t1b.DateOfAssessment) AS DateOfAssessment
    FROM Table1 t1b
    INNER JOIN Table2 t2b
        ON t1b.memQuestionaireID = t2b.memQuestionaireID
        AND t2b.MemberID IN
        (
            SELECT MemberID
            FROM Table3
            GROUP BY MemberID
            HAVING COUNT(*) > 1
        )
    GROUP BY t2b.MemberID
) AS list
    ON t1.DateOfAssessment < list.DateOfAssessment
    AND t2.MemberID = list.MemberID

To view what will be deleted w/o actually deleting, just change the first line from DELETE t1 to SELECT t1.*

|||

I'm a bit confused Carel,

Call me dense today, but it seems like the MemberID 'should' be in Table1 since it is the Member that has a memQuestionaireID, has a QuestScoreID and a DateOfAssessment. Then for Table2, it is the Questionaire that has a set of answers, etc. Please help me out here...

It seems like your Table1 has a one-to-many relationship with Table2. I'm assuming that a Member may have multiple instances of the same questionnaire (re-takes, etc.)

Is one questionnaire (MemQuestionnaireID) with a QuestScoreID and a DateOfAssessment directly linked to one or more set(s) of answers in Table2?

Is the QuestScoreID changed upon each re-take? (Shouldn't the score be directly related to a set of answers?)

I ask this because it seems from the design you presented that perhaps you want to also delete rows from Table2...

|||

I know the database structure is a bit stupid. Not my design!

I am only trying to remove duplicate entries in the QuestScore table.

Problem:

A member fills in a Questionaire - MemQuestionaires

The Querstionaire's Score is placed in the QuestScore Table.

There are duplicate entries in the QuestScore table with regards to a questionaireID from the memQuestionaires Table

If a duplicate exists, then the members can't log into their profiles anymore, so i need to remove the duplicates so that the members can access their profiles again.

I'm quite new to all of this, so any help would be greatly appreciated

Kind Regards

Carel Greaves.

|||

You gave us Table1, Table2, Table3...I'm guessing Table1 is QuestScore?

Did my code above work? If not, what was not quite right?

Are there multiple entries just in Table1, or is it Table3?

Please clarify and maybe post some sample data...

|||

Okay, so i'm a bit of a moron :-) he he

Here is the code i used to find the duplicates

SELECT me.MemberID, /*me.QuestionaireID,*/ q.memQuestionaireID, q.*, me.*

FROM questscore q, memQuestionaires m, Members me

WHERE q.memquestionaireID = m.memquestionaireID

AND m.MemberiD = me.memberid

AND me.ClientID = 124

ANd m.questionaireid = 85

AND q.sectionid >= 1000000

ORDER BY me.memberID

Here is my results:

MemberID   memQuestionaireID  QuestScoreID  AnswerID  AnswerValue  DateCompleted
10005772   7615               273308        0         -500         50:59.7
10005789   11124              267771        0         -500         25:16.0
10005798   11129              259150        0         -500         44:41.1
10005798   11129              261032        0         -500         01:21.3
10005807   8214               237464        0         -500         59:18.1
10005819   11142              293317        0         -500         43:09.4

i have marked the duplicate values in red that i want to eliminate.

I need to eliminate only the record with the oldest date, and keep the newer one.

Hopefully when i've gone for my SQL course then i will only be half a moron :-) he he

Thanks for helping me out guys.

Kind Regards

Carel Greaves

|||

Wow, talk about a 'bolixed' up database...

I'm still confused, in your query above, you have a WHERE clause that indicates the results should be filtered down to QuestionaireID = 85, but your duplicate records are for memQuestionaireID 11129. What is the difference between QuestionaireID and memQuestionaireID? How did that happen?

Also, the DateCompleted is obviously not a date, but a number of hours. Is that the number of hours since some event? How are the hours determined?

Which row is the 'newer one'?

It would be very helpful, and keep us from wasting our time, if you would post the DDL for each of the tables, as well as some sample data for each table (in the form of INSERT statements.)

As you've seen, you have folks sincerely attempting to help you, and to this point we've been wasting our time because we just don't have the 'full' picture. You've inherited a very 'odd' database, and you need to help us so we can better help you.

|||

Here is sample data in the Tables.

Member Table

med_aid  MemberID  ClientID  Name        Surname       Username      Password
NULL     10000000  1         NiftyName   NiftySurname  his           test
NULL     10000001  5         NiftyTest2  Surname2      nifty@.ybo.oa  JBCYWO
NULL     10000002  5         KeithTest   WilsonTest    willi         willi

memQuestionaires Table

memQuestionaireID  MemberID  QuestionaireID  LastPage  Paused  Complete  MaxPages  Sent  LastSaved  InitMail  ReceiveMail  UpdateDate
871                10000000  85              1         0       0         1         1     0          1         1            2007/02/25 06:23
872                10000001  85              1         0       0         1         1     0          1         1            2007/02/25 06:23
873                10000002  85              1         0       0         1         1     0          1         1            2007/02/25 06:23

QuestScore Table

QuestScoreID  memQuestionaireID  AnswerID  AnswerValue  SectionID  Page  QuestType  AnswerText  AnswerShort  Updatedate  DefaultValue  DateCompleted
4641          871                0         -9           361        1     9          NULL        NULL                                   2007/02/07 22:31
4642          872                0         -5           362        1     6          NULL        NULL                                   2007/02/07 22:31
4643          873                0         0            345        1     2          NULL        NULL                                   2007/02/07 22:31

When i executed this statement, i only used these three tables.

SELECT me.MemberID, /*me.QuestionaireID,*/ q.memQuestionaireID, q.*, me.*
FROM questscore q, memQuestionaires m, Members me
WHERE q.memquestionaireID = m.memquestionaireID
AND m.MemberiD = me.memberid
AND me.ClientID = 124
ANd m.questionaireid = 85
AND q.sectionid >= 1000000
ORDER BY me.memberID

I am only trying to remove the duplicate records for where ClientID = 124

AND QuestionaireID = 85

as for the 1000000, there it is just to narrow down the search for myself, there are a lot of duplicates.

There are a lot of members -> Members Table

Each member can fill in a number of questionaires -> QuestionaireID on memQuestionaires Table in this case 85

And all the questionaire's data is stored in the QuestScore Table which is linked to the memQuestionaires table using the memQuestionairesID.

There seems to be duplicate values in the QuestScore Table referring to the members.

The date completed is an actual DATETIME field in the database, i used excel to transfer the data to here. The newer date will be an actual date with the time in the database i.e. 2007/05/23 02:05:11

All the fields get inserted into the database via a application which i haven't seen.

I'm new to the company and am trying to figure a lot of it out for myself too.

If there is any more information you need, please don't hesitate to ask.

|||try this one,

DECLARE @.QuestScore TABLE (
QuestScoreID int
, memQuestionaireID int
, DateCompleted smalldatetime
)

DECLARE @.MemQuestionaire TABLE (
MemQuestionaireID int
, MemberID int
, QuestionaireID int
, UpdateDate smalldatetime
)

INSERT
INTO @.QuestScore
SELECT 1, 1, dateadd(day,-1, getdate()) UNION ALL
SELECT 2, 2, dateadd(day,-1, getdate()) UNION ALL
SELECT 3, 3, dateadd(day,-3, getdate())

INSERT
INTO @.MemQuestionaire
SELECT 1, 1, 4, dateadd(day,-1, getdate()) UNION ALL
SELECT 2, 1, 4, dateadd(day,-1, getdate()) UNION ALL
SELECT 3, 2, 5, dateadd(day,-3, getdate())

select *
from @.memquestionaire

select *
from @.questscore

declare @.tobedeleted table( -- create a table var for the data to be deleted
memquestionaireid int
, datecompleted smalldatetime
)

insert
into @.tobedeleted -- the memquestionaireid and datecompleted to be deleted
select MIN(qq.memquestionaireid) as memquestionaireid -- just in case if they have the same date, get the lowest id
, mq.datecompleted
from @.questscore qq inner join
(
select m.memberid
, m.questionaireid
, MIN(q.datecompleted) as datecompleted -- delete the lowest date
from @.memquestionaire m inner join
@.questscore q on m.memquestionaireid = q.memquestionaireid
group by
m.memberid
, m.questionaireid
having count(m.memberid) > 1 -- member having more than 1 quest score
) mq on qq.datecompleted = mq.datecompleted
group by
mq.datecompleted

delete @.questscore -- delete in questscore
from @.questscore q inner join
@.tobedeleted dq on q.memquestionaireid = dq.memquestionaireid
and q.datecompleted = dq.datecompleted

delete @.memquestionaire -- delete in memquestionaire
from @.memquestionaire m inner join
@.tobedeleted dq on m.memquestionaireid = dq.memquestionaireid

select *
from @.questscore

select *
from @.memquestionaire|||

Thanks, that's awesome, that's why i come to you guys for help.

Another quick question, that i think it might be easier to do, i just can't do it. (I just thought of it now)

If i added memQuestionaireID and SectionID together to create a unique field, and then filtered out the duplicates by removing the rows with the old dates, it might be easier, but how would i go about accomplishing this?

I don't know how to compare the dates against each other so that i keep the new date and get rid of the old date.

sorry for all of this i'm just trying to learn from all the things myself?

|||

Thanks, that was a start.

Here is an example of the 'best' form for DDL and sample data. In this fashion, each person that wants to try and help you can just run this code and have 'your' problem to work with. Without this, folks are turned away because it takes so much time and effort to duplicate your situation -and helps to keep us from running off on tangents that don't really help you.

Now if you would only add to the sample data so that examples of the duplicate records you want to find and delete are represented, we 'should' be able to help you.

Code Snippet


DECLARE @.Members table
( med_aid int,
MemberID int,
ClientID int,
Name varchar(20),
Surname varchar(20),
Username varchar(100),
Password varchar(20)
)


INSERT INTO @.Members VALUES ( NULL, 10000000, 1, 'NiftyName', 'NiftySurname', 'his', 'test' )
INSERT INTO @.Members VALUES ( NULL, 10000001, 5, 'NiftyTest2', 'Surname2', 'nifty@.ybo.oa', 'JBCYWO' )
INSERT INTO @.Members VALUES ( NULL, 10000002, 5, 'KeithTest', 'WilsonTest', 'willi', 'willi' )


DECLARE @.memQuestionaires table
( memQuestionaireID int,
MemberID int,
QuestionaireID int,
LastPage int,
Paused int,
Complete int,
MaxPages int,
Sent int,
LastSaved int,
InitMail int,
ReceiveMail int,
UpdateDate datetime
)


INSERT INTO @.memQuestionaires VALUES ( 871, 10000000, 85, 1, 0, 0, 1, 1, 0, 1, 1, '2007/02/25 06:23' )
INSERT INTO @.memQuestionaires VALUES ( 872, 10000001, 85, 1, 0, 0, 1, 1, 0, 1, 1, '2007/02/25 06:23' )
INSERT INTO @.memQuestionaires VALUES ( 873, 10000002, 85, 1, 0, 0, 1, 1, 0, 1, 1, '2007/02/25 06:23' )


DECLARE @.QuestScore table
( QuestScoreID int,
memQuestionaireID int,
AnswerID int,
AnswerValue int,
SectionID int,
Page int,
QuestType int,
AnswerText int,
AnswerShort int,
Updatedate int,
DefaultValue int,
DateCompleted datetime
)


INSERT INTO @.QuestScore VALUES ( 4641, 871, 0, -9, 361, 1, 9, '', '', NULL, NULL, '2007/02/07 22:31' )
INSERT INTO @.QuestScore VALUES ( 4642, 872, 0, -5, 362, 1, 6, '', '', NULL, NULL, '2007/02/07 22:31' )
INSERT INTO @.QuestScore VALUES ( 4643, 873, 0, 0, 345, 1, 2, '', '', NULL, NULL, '2007/02/07 22:31' )

|||very clean example arnie.. i wonder what your desk looks like |||

Sorry i'm sending you guys on a wild goose chase like this, and thanks a lot Arnie, i'd buy you a case of beer if i knew where you were.

I'm really learning a lot from this!

I just thought that it might be easier, if i had to add the memQuestionaireID and SectionID together in the QuestScore table to get a single unique value and remove the duplicates based on the newer unique value according to the Date, i.e. Only removing the old dates.

Sorry about this, i only saw it now.

That way i'm only using a single table.

|||

Thanks,

And my desk is under the stuff somewhere, or so it was a few months ago...

And UPS serves Portland, OR! But it would be much more enjoyable to share, so if you can wrangle it, start lobbying management to let you come to the PASS (Professional Association for SQL Server) Conference in September. (http://sqlpass.org/)

|||

Cummon guys, i'm a newbie at SQL Server in South Africa and it's late at night and my CEO's say i can't go home until i get this problem sorted out.

Please could you guys help me out quick?
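
Picking up the earlier idea of treating memQuestionaireID plus SectionID as the key and keeping only the newest row, here is a sketch against the QuestScore table, using the column names from the sample data above (ties on DateCompleted are broken by keeping the higher QuestScoreID). Run the SELECT form first to review what would go, then switch to the DELETE:

-- preview the older duplicates
SELECT q.*
FROM QuestScore AS q
WHERE EXISTS (SELECT *
              FROM QuestScore AS q2
              WHERE q2.memQuestionaireID = q.memQuestionaireID
                AND q2.SectionID = q.SectionID
                AND (q2.DateCompleted > q.DateCompleted
                     OR (q2.DateCompleted = q.DateCompleted
                         AND q2.QuestScoreID > q.QuestScoreID)))

-- then remove them
DELETE q
FROM QuestScore AS q
WHERE EXISTS (SELECT *
              FROM QuestScore AS q2
              WHERE q2.memQuestionaireID = q.memQuestionaireID
                AND q2.SectionID = q.SectionID
                AND (q2.DateCompleted > q.DateCompleted
                     OR (q2.DateCompleted = q.DateCompleted
                         AND q2.QuestScoreID > q.QuestScoreID)))

If the cleanup should only touch the problem members (ClientID 124, QuestionaireID 85), add the same joins to memQuestionaires and Members that the diagnostic SELECT earlier in the thread used.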