Sunday, February 26, 2012
Eliminating the expensive remote join
I've got a table (dexIndex) with 9 million rows in it. It's got an identity
column (index_id), a column that's full-text indexed (insureds_name), and a
column that I want to filter on (agency_number).
When I query like this:
select * from decIndex where agency_number = '0349323' and
contains(insureds_name,'"SMITH"')
The execution plan involves a remote scan (98%) and a clustered index
seek (0%) which lead into a hash match/inner join (2%). The problem is
that the remote scan is returning 51,000 rows even though only 1 comes
out of the hash match/inner join. Thus, the query takes over a minute
on a built-to-the-hilt server.
Is there a way to have the FT engine only look through or return a
subset of the FT index, like only those rows which correspond to
agency_number = '0349323'?
I'd like to avoid adding agency_number to the FT index.
Thanks!
Regrettably there is no way of doing this. One of the strategies for getting
around this problem is to partition your table according to the other
elements of your where clause. Another option is to build full-text indexes
on indexed views in SQL 2005. Again, the view would be of a section of your
table which would match your where conditions.
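A minimal sketch of the indexed-view idea, using the names from the post above (the post names the table both dexIndex and decIndex; decIndex is used here to match the query, and the view, index, and catalog names are made up; this also assumes a full-text catalog already exists):
create view dbo.decIndex_agency0349323 with schemabinding
as
select index_id, insureds_name
from dbo.decIndex
where agency_number = '0349323'
go
create unique clustered index IX_decIndex_agency0349323 on dbo.decIndex_agency0349323 (index_id)
go
create fulltext index on dbo.decIndex_agency0349323 (insureds_name)
key index IX_decIndex_agency0349323 on ftCatalog
go
Queries for that agency would then run the CONTAINS predicate against the view, so the full-text engine only ever looks at that slice of the table.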
"Ryan Walberg [MCSD]" <generalpf@.nospam.yahoo.reallynospam.com> wrote in
message news:%233kbwJZJFHA.588@.TK2MSFTNGP15.phx.gbl...
> I've got a table (dexIndex) with 9 million rows in it. It's got an
> identity column (index_id) and a column that's full-text indexed
> (insureds_name) and a column that I want to filter on (agency_number).
> When I query like this:
> select * from decIndex where agency_number = '0349323' and
> contains(insureds_name,'"SMITH"')
> The execution plan involves a remote scan (98%) and a clustered index seek
> (0%) which lead into a hash match/inner join (2%). The problem is that
> the remote scan is returning 51,000 rows even though only 1 comes out of
> the hash match/inner join. Thus, the query takes over a minute on a
> built-to-the-hilt server.
> Is there a way to have the FT engine only look through or return a subset
> of the FT index, like only those rows which correspond to agency_number =
> '0349323'?
> I'd like to avoid adding agency_number to the FT index.
> Thanks!
|||Hilary Cotter wrote:
> Regrettably there is no way of doing this. One of the strategies for getting
> around this problem is to partition your table according to the other
> elements of your where clause. Another option is to build full-text indexes
> on indexed views in SQL 2005. Again, the view would be of a section of your
> table which would match your where conditions.
I ended up clustering the index that corresponded to the unique column
for the full-text index and that sped it up a great deal.
Thanks,
Ryan
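For reference, a sketch of what that change might look like; the index name is made up, and this assumes index_id is the single unique, non-nullable column the full-text index uses as its key:
-- assumes any existing clustered index or same-named index on the table has been dropped first
create unique clustered index IX_decIndex_index_id on dbo.decIndex (index_id)
With the full-text key index clustered, lookups from the full-text results back to the base table land directly on the clustered index instead of going through an extra bookmark lookup.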
Eliminating spaces left by collapsed drilldown option
I have a report which has the hidden/drilldown feature enabled when the
report is opened. The problem, visually, that I am having, is that the next
group's spaces are visible both on the rendered web page and the preview pane
in Visual Studio. Does anyone know of a way to "collapse" the blank
lines (like in Crystal Reports) so that the items you can drill down to
initially appear one right after another, only spreading out to accommodate
the information under each one when that particular group heading is expanded?
Thanks!
Put the table/matrix inside a rectangle and toggle the rectangle's
visibility.
Works for me
Highly Obscure
Eliminating spaces in query output
Also, I'm running the select statement via osql if that helps.
When selecting the column name you can use the trim function, e.g.
select rtrim(ltrim(emp_name))
from employees
This will trim blank spaces from the left and right hand ends of the character string. If you only need to clear trailing blanks just use the rtrim function.
|||RTRIM was just what I needed, thank you.
Eliminating rows from select results
is there any way to eliminate a row from the results returned from a select statement?
I'm using some aggregate functions and doing some division and occasionally I get a divide by zero error. Is there a way I can leave the row out if the divisor is zero? Briefly, my query looks like this:
select sum(a*b)/sum(b), c
from TableA
group by c
If sum(b) is zero I want to leave that row out.
Thanks for your help!
Wallace
Does your database engine support the HAVING clause?
-PatP
|||This is what HAVING is doing essentially:
SELECT sum_a / sum_b, c
FROM ( SELECT SUM(a*b), SUM(b), c
FROM tableA
GROUP BY c ) AS t(sum_a, sum_b, c)
WHERE sum_b <> 0
|||Having should do the trick.
Thanks for your help!
|||This is what HAVING is doing essentially:
SELECT sum_a / sum_b, c
FROM ( SELECT SUM(a*b), SUM(b), c
FROM tableA
GROUP BY c ) AS t(sum_a, sum_b, c)
WHERE sum_b <> 0
It is answers like the one you provided above that encourage poor software development, in this instance, database development.
There is no need to create an inline view and filter the result set by applying a where clause to the outer query, when in this instance, the exact same result can be produced by using the Having clause.
Using the having clause is the preferred method for applying criteria to aggregate results, it was designed for this exact purpose. Although the where clause can be used in certain situations to filter aggregates, it was not designed so.
To follow your example, perhaps when I provide examples I should replace all column names with some arbitrary name in an attempt to make the solution appear complex and worthy of admiration, when in fact these types of responses would only cause unnecessary confusion to the reader.
|||robert, take it easy, he was showing what the HAVING clause does, not suggesting that you avoid using it
|||I do apologise if my response appears rude and overtly frank. However, I consider it to be totally inappropriate to hijack a thread with the intention to be a "clever dick" and publish a random solution with complete disregard for the original question.
stolze posted an unnecessary solution to a simple question, which could have easily been answered using a HAVING clause, but did not consider it proper to provide in addition, a solution using the Having clause.
Stolze: The poster was not asking for ways to replace the functionality of the Having clause, but instead the question asked how to remove rows from a set, and in this particular instance, the condition was to be applied after the aggregation. In future, perhaps you could assist the poster with their question before trying to prove a point with respect to your technical ability.
You wouldn't speak to your fellow colleagues in such a patronising tone, so why do it here?
|||You wouldn't speak to your fellow colleagues in such a patronising tone, so why do it here?
so why are you doing it?
|||My comments were not said with a patronising tone.
|||perhaps you did not intend them to be, but that is how they came across
|||I apologise to all those who may find my comments in this thread rude and patronising. My intent was just to point out that if a simple and easy to understand solution exists, then it should be provided first, before suggesting other less obvious methods.
That's all.
|||select sum(a*b)/sum(b), c
from TableA
group by c
If sum(b) is zero I want to leave that row out.
As already suggested, the following is indeed what you need:
SELECT sum(a*b)/sum(b), c
FROM TableA
GROUP BY c
HAVING sum(b) <> 0
Note that this also avoids zero division, i.e., the expression sum(a*b)/sum(b) is never executed when sum(b) is 0.
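A small self-contained illustration of that behaviour (table contents invented for the example):
CREATE TABLE TableA (a int, b int, c int)
INSERT INTO TableA VALUES (10, 2, 1)
INSERT INTO TableA VALUES (20, 3, 1)
INSERT INTO TableA VALUES (5, 0, 2) -- the only row for group c = 2, and its b is 0
SELECT sum(a*b)/sum(b) AS ratio, c
FROM TableA
GROUP BY c
HAVING sum(b) <> 0
-- returns one row (c = 1); the c = 2 group is filtered out before the division is attempted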
|||I do apologise if my response appears rude and overtly frank. However, I consider it to be totally inappropriate to hijack a thread with the intention to be a "clever dick" and publish a random solution with complete disregard for the original question.
stolze posted an unnecessary solution to a simple question, which could have easily been answered using a HAVING clause, but did not consider it proper to provide in addition, a solution using the Having clause.
I just gave an explanation on how HAVING works. That's all...
Stolze: The poster was not asking for ways to replace the functionality of the Having clause, but instead the question asked how to remove rows from a set, and in this particular instance, the condition was to be applied after the aggregation. In future, perhaps you could assist the poster with their question before trying to prove a point with respect to your technical ability.
I have no idea why you are reacting so aggressively.
The OP apparently didn't know about the existence of HAVING and what it does. While the answer given by Pat was correct, I felt it is a good idea to provide more background information.
I had the chance to give some database courses at universities in the past. One thing I learned there is that students (they being university students or professionals doesn't matter in this respect) first have to learn basic concepts. Later you can use those basic concepts to explain other things. For example, once someone understands sub-selects, he/she will grasp the idea of having-clauses right away if you can show such a reformulated query. And that was my intention here as well. (I don't know where this was "patronizing".)
Also, I believe it is essential that people working with an RDBMS understand what is going on internally in order to avoid bad queries or to know what can be done efficiently and what may not be such a great idea. For example, just look at the comments here: http://us2.php.net/odbc_num_rows All those solutions show a distinct lack of understanding of relational database systems. What's my point? It is that a good idea would be that people implement their own simple DBMS because it helps tremendously to understand how a DBMS works internally or how queries can be rephrased. A simple approach to deal with HAVING is to internally rewrite a query to the construct I posted - without losing any functionality. (At the end of the day, the HAVING clause is very helpful but it just remains a bit of syntactic sugar in SQL.) However, I'm fully aware that not everyone has the time or access to the necessary expertise to implement their own small DBMS...
I suggest that you read a few more threads in this forum and other newsgroups. You will find that questions range from very basic stuff on transactions, the relational model, etc. to the meaning of specific options or error messages and their possible causes. I found that additional explanations are a good way to answer questions to the extent that the poster knows how to proceed.
|||I just gave an explanation on how HAVING works. That's all...
A very handy explanation... I knew how HAVING works, and I'm sure if someone had asked me I could have shown how it's non-primitive, but I never actually broke it down like that.
I have no idea why you are reacting so aggressively.
Even if you could figure out why people say what they do, who cares? It's just words.
eliminating redundant data
edit: this came out longer than I thought, any comments about anything
here is greatly appreciated. thank you for reading
My system stores millions of records, each with fields like firstname,
lastname, email address, city, state, zip, along with any number of user
defined fields. The application allows users to define message templates
with variables. They can then select a template, and for each variable
in the template, type in a value or select a field.
The system allows you to query for messages you've sent by specifying
criteria for the variables (not the fields).
This requirement has made it difficult to normalize my datamodel at all
for speed. What I have is this:
[fieldindex]
id int PK
name nvarchar
type datatype
[recordindex]
id int PK
...
[recordvalues]
recordid int PK
fieldid int PK
value nvarchar
whenever messages are sent, I store which fields were mapped to what
variables for that deployment. So the query with a variable criteria
looks like this:
select coalesce(vm.value, rv.value)
from sentmessages sm
inner join variablemapping vm on vm.deploymentid=sm.deploymentid
left outer join recordvalues rv on
rv.recordid=sm.recordid and rv.fieldid=vm.fieldid
where coalesce(vm.value, rv.value) ...
this model works pretty well for searching messages with variable
criteria and looking up variable values for a particular message. the
big problem I have is that the recordvalues table is HUGE, 1 million
records with 50 fields each = 50 million recordvalues rows. The value,
two int columns plus the two indexes I have on the table make it into a
beast. Importing data takes forever. Querying the records (with a field
criteria) also takes longer than it should.
makes sense, the performance was largely IO bound.
I decided to try and cut into that IO. looking at a recordvalues table
with over 100 million rows in it, there were only about 3 million unique
values. so I split the recordvalues table into two tables:
[recordvalues]
recordid int PK
fieldid int PK
valueid int
[valueindex]
id int PK
value nvarchar (unique)
now, valueindex holds 3 million unique values and recordvalues
references them by id. to my surprise this shaved only 500mb off a 4gb
database!
importing didn't get any faster either; although it's no longer IO bound,
it appears the cpu, as the new bottleneck, outweighed the IO bottleneck.
this is probably because I haven't optimized the queries for the new
tables (was hoping it wouldn't be so hard w/o the IO problem).
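For what it's worth, a sketch of how the lookup query from earlier might change against the split tables (same table names as above; the criteria after the final coalesce are elided here just as they were in the original query):
select coalesce(vm.value, vi.value)
from sentmessages sm
inner join variablemapping vm on vm.deploymentid = sm.deploymentid
left outer join recordvalues rv
on rv.recordid = sm.recordid and rv.fieldid = vm.fieldid
left outer join valueindex vi on vi.id = rv.valueid -- extra hop to fetch the shared value
where coalesce(vm.value, vi.value) ...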
is there a better way to accomplish what I'm trying to do? (eliminate
the redundant data).. does SQL have built-in constructs to do stuff like
this? It seems like maybe I'm trying to duplicate functionality at a
high level that may already exist at a lower level.
IO is becoming a serious bottleneck.
the million record 50 field csv file is only 500mb. I would've thought
that after eliminating all the redundant first name, city, last name,
etc it would be less data and not 8x more!
-
Gordon
No database vendor, version, platform... Why are you using tables as indexes
when you can use indexes as indexes? Normalizing a data model generally
slows it down. Denormalizing generally speeds things up.
"gordy" <gordy@.dynamicsdirect.com> wrote in message
news:40be960f$1_1@.Usenet.com...
> edit: this came out longer than I thought, any comments about anything
> here is greatly appreciated. thank you for reading
> My system stores millions of records, each with fields like firstname,
> lastname, email address, city, state, zip, along with any number of user
> defined fields. The application allows users to define message templates
> with variables. They can then select a template, and for each variable
> in the template, type in a value or select a field.
> The system allows you to query for messages you've sent by specifying
> criteria for the variables (not the fields).
> This requirement has made it difficult to normalize my datamodel at all
> for speed. What I have is this:
> [fieldindex]
> id int PK
> name nvarchar
> type datatype
> [recordindex]
> id int PK
> ...
> [recordvalues]
> recordid int PK
> fieldid int PK
> value nvarchar
> whenever messages are sent, I store which fields were mapped to what
> variables for that deployment. So the query with a variable criteria
> looks like this:
> select coalesce(vm.value, rv.value)
> from sentmessages sm
> inner join variablemapping vm on vm.deploymentid=sm.deploymentid
> left outer join recordvalues rv on
> rv.recordid=sm.recordid and rv.fieldid=vm.fieldid
> where coalesce(vm.value, rv.value) ...
> this model works pretty well for searching messages with variable
> criteria and looking up variable values for a particular message. the
> big problem I have is that the recordvalues table is HUGE, 1 million
> records with 50 fields each = 50 million recordvalues rows. The value,
> two int columns plus the two indexes I have on the table make it into a
> beast. Importing data takes forever. Querying the records (with a field
> criteria) also takes longer than it should.
> makes sense, the performance was largely IO bound.
> I decided to try and cut into that IO. looking at a recordvalues table
> with over 100 million rows in it, there were only about 3 million unique
> values. so I split the recordvalues table into two tables:
> [recordvalues]
> recordid int PK
> fieldid int PK
> valueid int
> [valueindex]
> id int PK
> value nvarchar (unique)
> now, valueindex holds 3 million unique values and recordvalues
> references them by id. to my suprise this shaved only 500mb off a 4gb
> database!
> importing didn't get any faster either, although it's no longer IO bound
> it appears the cpu as the new bottleneck outweighed the IO bottleneck.
> this is probably because I haven't optimized the queries for the new
> tables (was hoping it wouldn't be so hard w/o the IO problem).
> is there a better way to accomplish what I'm trying to do? (eliminate
> the redundant data).. does SQL have built-in constructs to do stuff like
> this? It seems like maybe I'm trying to duplicate functionality at a
> high level that may already exist at a lower level.
> IO is becoming a serious bottleneck.
> the million record 50 field csv file is only 500mb. I would've thought
> that after eliminating all the redundant first name, city, last name,
> etc it would be less data and not 8x more!
> -
> Gordon
> Posted Via Usenet.com Premium Usenet Newsgroup Services
> ------------------
> ** SPEED ** RETENTION ** COMPLETION ** ANONYMITY **
> ------------------
|||> No database vendor, version, platform... Why are you using tables as indexes
> when you can use indexes as indexes? Normalizing a data model generally
> slows it down. Denormalizing generally speeds things up.
sorry, I'm using MS SQL2000
How can I use indexes as indexes? I mean, in the example I posted, can
you give an example?
|||Well, I don't know SQL Server, but in Oracle, you create an index using the
CREATE INDEX statement. I suspect it works the same or similar in SQL
Server.
Here's an Oracle example that creates an index called
"asearch_client_id_idx" on the client_id field in a table called
"alphasearch" owned by user "alphasearch":
CREATE INDEX ALPHASEARCH.ASEARCH_CLIENT_ID_IDX
ON ALPHASEARCH.ALPHASEARCH(CLIENT_ID);
Now, I am about to lie a little bit for simplicity's sake, but here goes...
When a query is executed that uses client_id as a search or sort criteria,
the Oracle optimizer will decide whether or not to use the index. If it
does, it looks up the values needed in the index, and retrieves their row
ids, which in turn are essentially pointers to the location of the data in
data blocks on disc, so it goes directly to that location on disc and
retrieves the data out of the blocks. It does not need to go logically into
the table. Note that what I refer to as row_id in Oracle may not be the same
concept in SQL Server.
Hope you get the general idea, and you should consult your documentation
about indexes.
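The SQL Server 2000 syntax is essentially the same. Against the tables described earlier it might look like this (index names made up; the unique index on the nvarchar value column assumes the column fits within the 900-byte index key limit):
create index IX_recordvalues_field_value on recordvalues (fieldid, valueid)
create unique index IX_valueindex_value on valueindex (value)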
"gordy" <gordy@.dynamicsdirect.com> wrote in message
news:40bf5ec1$1_1@.Usenet.com...
> > No database vendor, version, platform... Why are you using tables as
indexes
> > when you can use indexes as indexes? Normalizing a data model generally
> > slows it down. Denormalizing generally speeds things up.
> sorry, I'm using MS SQL2000
> How can I use indexes as indexes? I mean, in the example I posted, can
> you give an example?
> Posted Via Usenet.com Premium Usenet Newsgroup Services
> ------------------
> ** SPEED ** RETENTION ** COMPLETION ** ANONYMITY **
> ------------------
> http://www.usenet.com|||> Well, I don't know SQL Server, but in Oracle, you create an index using the
> CREATE INDEX statement. I suspect it works the same or similar in SQL
> Server.
> Here's an Oracle example that creates an index called
> "asearch_client_id_idx" on the client_id field in a table called
> "alphasearch" owned by user "alphasearch":
> CREATE INDEX ALPHASEARCH.ASEARCH_CLIENT_ID_IDX
> ON ALPHASEARCH.ALPHASEARCH(CLIENT_ID);
wow, what a concept ;)
I appreciate the criticism.. after all that's the intent of my original
post, however, I would prefer it to be of the constructive variety.
In my system, there are 'fields' and there are 'variables'. the user
creates the relationships between them whenever they send a message. in
order to search for messages by 'variable' values, sql needs a
relationship of its own to translate between them.
this has kept me from being able to use the obvious:
[records]
id,field1,field2,field3,...
because in a query for 'variable1', depending on the message it may have
to look at 'field3' or 'field4' for the value. this requirement is why I
have the tables I have now (recordindex, fieldindex and recordvalues).
I realize this makes for very large indexes.. and like you said, the
table itself is nothing more than a big index. This is the problem I'd
like to solve. In my original post I explained how I attempted to
eliminate redundant data, but I only eliminated 500mb (of 4gb) because
the majority of volume in this db isn't the data itself, but the index size.
|||"gordy" <gordy@.dynamicsdirect.com> wrote in message
news:40bf9c49$1_1@.Usenet.com...
> > Well, I don't know SQL Server, but in Oracle, you create an index using
the
> > CREATE INDEX statement. I suspect it works the same or similar in SQL
> > Server.
> > Here's an Oracle example that creates an index called
> > "asearch_client_id_idx" on the client_id field in a table called
> > "alphasearch" owned by user "alphasearch":
> > CREATE INDEX ALPHASEARCH.ASEARCH_CLIENT_ID_IDX
> > ON ALPHASEARCH.ALPHASEARCH(CLIENT_ID);
> wow, what a concept ;)
> I appreciate the criticism.. after all that's the intent of my original
> post, however, I would prefer it to be of the constructive variety.
It wasn't criticism. I thought you really didn't know about RDBMS indexes,
being that you essentially created your own.
> In my system, there are 'fields' and there are 'variables'. the user
> creates the relationships between them whenever they send a message. in
> order to search for messages by 'variable' values, sql needs a
> relationship of its own to translate between them.
> this has kept me from being able to use the obvious:
> [records]
> id,field1,field2,field3,...
> because in a query for 'variable1', depending on the message it may have
> to look at 'field3' or 'field4' for the value. this requirement is why I
> have the tables I have now (recordindex, fieldindex and recordvalues).
> I realize this makes for very large indexes.. and like you said, the
> table itself is nothing more than a big index. This is the problem I'd
> like to solve. In my original post I explained how I attempted to
> eliminate redundant data, but I only eliminated 500mb (of 4gb) because
> the majority of volume in this db isn't the data itself, but the index
size.
The only similar situation I've seen like this (home-brew index constructs)
is with a document imaging system called FileNET. In that case, the vendor
actually created its own mini RDBMS to handle just these index/table
constructs. It was very fast, but, of course, proprietary.
It's hard to tell exactly what you are trying to do, though. Could you get
into the business requirements a bit? It would help me to understand what
you need to do. I get the feeling from the solution you came up with that
you are a programmer, not a DBA.
Eliminating Dynamic SQL
I am refactoring stored procedures that use dynamic sql.
The reason the stored procedures use dynamic sql is because
the data that is needed comes from another MS SQL database
that resides on the same server instance.
The following is a code example from a stored procedure:
declare @theDatabase sysname;
set @theDatabase = 'SomeDatabase';
declare @thePrimaryKey int;
set @thePrimaryKey = 2;
declare @theSqlString varchar(8000);
set @theSqlString = 'select r.Feild from ' +
@theDatabase +
'..TableName r ' +
'where r.ID = '
+ rtrim( str( @thePrimaryKey ) )
exec sp_executesql @theSqlString;
The problem is that dynamic sql is EXTREMELY slow! I have
been doing some research and have found a little bit on
sp_linkedservers and OPENQUERY but I have not figured out
how to accomplish what is done above.
Can anyone give me an example of how I can use
sp_linkedservers and OPENQUERY to accomplish this query?
Would it be better, performance wise, to just continue
using dynamic sql?
Any help would be greatly appreciated.
Sincerely,
John Dickey
jpd0861@.msn.com
John Dickey wrote:
> The problem is that dynamic sql is EXTREMELY slow! Would it be better, performance wise, to just continue using dynamic sql?
If you code the dynamic SQL properly, you can eliminate much of the
performance overhead. In your case, you're using sp_executesql, but are
not taking advantage of its main benefit over EXEC. That is, parameter
binding.
Instead of how it's currently coded, you could use:
set @theSqlString = 'select r.Feild from ' +
'[' + @theDatabase + ']' +
'..TableName r ' +
'where r.ID = @thePrimaryKey'
Exec sp_executesql @theSqlString, N'@thePrimaryKey INT', @thePrimaryKey
I'm not sure why it's necessary, though, given your example. You have a
hard-coded database name on the same server. If you know the database,
you can just fully-qualify the object in the SQL:
Select * from [AnotherDatabase].dbo.[MyTable]
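Pulling those pieces together, a sketch of the whole batch (note that sp_executesql expects its statement and parameter-definition arguments as nvarchar; QUOTENAME is used here to bracket the database name):
declare @theDatabase sysname
set @theDatabase = 'SomeDatabase'
declare @thePrimaryKey int
set @thePrimaryKey = 2
declare @theSqlString nvarchar(4000)
set @theSqlString = N'select r.Feild from ' +
quotename(@theDatabase) +
N'..TableName r where r.ID = @thePrimaryKey'
exec sp_executesql @theSqlString, N'@thePrimaryKey int', @thePrimaryKey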
David Gugick
Imceda Software
www.imceda.com
|||Why do you want to parameterize the name of the target database?
If the name may change and/or you prefer not to hard-code it in SPs
then just create a view or views to reference the other database and
reference only the views in your SPs. That way the database name is
coded only in a few places instead of every SP.
David Portas
SQL Server MVP
--
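A sketch of that approach (view name invented, object names borrowed from the earlier example):
-- created once in the database that owns the stored procedures
create view dbo.vTableName
as
select r.ID, r.Feild
from SomeDatabase..TableName r
go
-- procedures then simply reference dbo.vTableName; if the other database
-- is ever renamed, only this one view definition has to change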
Eliminating Dynamic SQL
The reason the stored procedures use dynamic sql is because
the data that is needed comes from another MS SQL database
that resides on the same server instance.
The following is a code example from a stored procedure:
declare @theDatabase sysname;
set @theDatabase = 'A database that I get based on
criteria at runtime.';
declare @thePrimaryKey int;
set @thePrimaryKey = 2;
declare @theSqlString varchar(8000);
set @theSqlString = 'select r.Feild from ' +
@theDatabase +
'..TableName r ' +
'where r.ID = '
+ rtrim( str( @thePrimaryKey ) )
exec sp_executesql @theSqlString;
The problem is that dynamic sql is EXTREMELY slow! I have
been doing some research and have found a little bit on
sp_linkedservers and OPENQUERY but I have not figured out
how to accomplish what is done above.
Can anyone give me an example of how I can use
sp_linkedservers and OPENQUERY to accomplish this query?
Would it be better, performance wise, to just continue
using dynamic sql?
Any help would be greatly appreciated.
Sincerely,
John Dickey
jpd0861@.msn.com
If you have the same table structures in each database then you can
create a partitioned view across them and reference the partitioned
view instead. Whether this is the right solution though may depend on
just why you have the data split across multiple DBs in the first
place. It doesn't seem like a very practical architecture if it forces
you to write dynamic SQL in all your production code.
David Portas
SQL Server MVP
--
|||Thank you for your reply David.
The reason that we have different databases is because the
databases are for different applications that my
application interfaces with. They happen to be Great Plains
accounting databases that I am trying to retrive data from.
Can you give me more information about the partitioned view
and how I can set that up?
John Dickey
|||It's all in Books Online:
http://msdn.microsoft.com/library/e...des_06_17zr.asp
David Portas
SQL Server MVP
--
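For completeness, a rough sketch of the cross-database view David describes, assuming two company databases with identically structured tables (all names invented). Strictly speaking this is a plain UNION ALL view; a true partitioned view would also need CHECK constraints on the partitioning column so the optimizer can skip the branches it doesn't need:
create view dbo.AllCompanies_TableName
as
select 'CompanyA' as CompanyDB, r.ID, r.Feild
from CompanyA..TableName r
union all
select 'CompanyB' as CompanyDB, r.ID, r.Feild
from CompanyB..TableName r
go
-- the procedure can then filter on CompanyDB instead of building the database name into a string:
-- select Feild from dbo.AllCompanies_TableName where CompanyDB = @theDatabase and ID = @thePrimaryKey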
Eliminating Duplicates
I have a table with 580 duplicate rows out of a total of 1101 rows. The
duplicates are in 3 columns that were supposed to be primary key columns
(developers forgot to create the primary key in development; production is
OK).
I figured the duplicates with a query but I don't know how to delete them
from the table.
Can anyone help ? Thanks.
Hi,
See this:-
http://www.sqlteam.com/item.asp?ItemID=3331
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=6256
http://support.microsoft.com/defaul...kb;en-us;139444
Thanks
Hari
SQL Server MVP
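One common approach on SQL 2000 boils down to copying the distinct rows aside and reloading the table; a sketch with made-up names (col1, col2, col3 standing in for the three would-be key columns), which assumes the duplicate rows are identical in every column:
select distinct col1, col2, col3, other_col
into #dedup
from dbo.MyTable
truncate table dbo.MyTable
insert into dbo.MyTable (col1, col2, col3, other_col)
select col1, col2, col3, other_col from #dedup
drop table #dedup
-- then add the missing primary key so the duplicates cannot come back:
-- alter table dbo.MyTable add constraint PK_MyTable primary key (col1, col2, col3)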
"DXC" <DXC@.discussions.microsoft.com> wrote in message
news:DAF8776C-7644-4B3A-83BC-4C6B02FF2607@.microsoft.com...
>I have a table with 580 rows of duplicate on total of 1101 rows. The
> duplicates are in 3 columns that were supposed to be primary key columns
> (Developers forgot to create the primary key in development-production is
> OK).
> I figured the duplicates with a query but I don't know how to delete them
> from the table.
> Can anyone help ? Thanks.
Eliminating Duplicates
Here's the DDL for the tables in question:
CREATE TABLE [dbo].[Office] (
[OfficeID] [int] IDENTITY (1, 1) NOT NULL ,
[ParentOfficeID] [int] NOT NULL ,
[WebSiteID] [int] NOT NULL ,
[IsDisplayOnWeb] [bit] NOT NULL ,
[IsDisplayOnAdmin] [bit] NOT NULL ,
[OfficeStatus] [char] (1) NOT NULL ,
[DisplayORD] [smallint] NOT NULL ,
[OfficeTYPE] [varchar] (10) NOT NULL ,
[OfficeNM] [varchar] (50) NOT NULL ,
[OfficeDisplayNM] [varchar] (50) NOT NULL ,
[OfficeADDR1] [varchar] (50) NOT NULL ,
[OfficeADDR2] [varchar] (50) NOT NULL ,
[OfficeCityNM] [varchar] (50) NOT NULL ,
[OfficeStateCD] [char] (2) NOT NULL ,
[OfficePostalCD] [varchar] (15) NOT NULL ,
[OfficeIMG] [varchar] (100) NOT NULL ,
[OfficeIMGPath] [varchar] (100) NOT NULL ,
[RegionID] [int] NOT NULL ,
[OfficeTourURL] [varchar] (255) NULL ,
[GeoAreaID] [int] NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL ,
[OfficeBrandedURL] [varchar] (255) NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[OfficeManagement] (
[OfficeID] [int] NOT NULL ,
[PersonnelID] [int] NOT NULL ,
[JobTitleID] [int] NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[SeqNBR] [int] NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[OfficeMls] (
[OfficeID] [int] NOT NULL ,
[SourceID] [int] NOT NULL ,
[OfficeMlsNBR] [varchar] (20) NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Personnel] (
[PersonnelID] [int] IDENTITY (1, 1) NOT NULL ,
[PersonnelDisplayName] [varchar] (100) NOT NULL ,
[FirstNM] [varchar] (50) NOT NULL ,
[PreferredFirstNM] [varchar] (50) NOT NULL ,
[MiddleNM] [varchar] (50) NOT NULL ,
[LastNM] [varchar] (50) NOT NULL ,
[PersonalTaxID] [varchar] (9) NOT NULL ,
[HireDT] [datetime] NOT NULL ,
[TermDT] [datetime] NOT NULL ,
[HomePhoneNBR] [varchar] (15) NULL ,
[HomeADDR1] [varchar] (50) NOT NULL ,
[HomeADDR2] [varchar] (50) NOT NULL ,
[HomeCityNM] [varchar] (50) NOT NULL ,
[HomeStateCD] [char] (2) NOT NULL ,
[HomePostalCD] [varchar] (15) NOT NULL ,
[PersonnelLangCSV] [varchar] (500) NOT NULL ,
[PersonnelSlogan] [varchar] (500) NOT NULL ,
[BGColor] [varchar] (50) NOT NULL ,
[IsEAgent] [bit] NOT NULL ,
[IsArchAgent] [bit] NOT NULL ,
[IsOptOut] [bit] NOT NULL ,
[IsDispOnlyPrefFirstNM] [bit] NOT NULL ,
[IsHideMyListingLink] [bit] NOT NULL ,
[IsPreviewsSpecialist] [bit] NOT NULL ,
[AudioFileNM] [varchar] (100) NULL ,
[iProviderID] [int] NOT NULL ,
[DRENumber] [varchar] (10) NOT NULL ,
[AgentBrandedURL] [varchar] (255) NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL ,
[IsDisplayAwards] [bit] NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[PersonnelMLS] (
[PersonnelID] [int] NOT NULL ,
[SourceID] [int] NOT NULL ,
[AgentMlsNBR] [varchar] (20) NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Office] ADD
CONSTRAINT [FK_Office_OfficeProfile] FOREIGN KEY
(
[OfficeID]
) REFERENCES [dbo].[OfficeProfile] (
[OfficeID]
) NOT FOR REPLICATION
GO
alter table [dbo].[Office] nocheck constraint [FK_Office_OfficeProfile]
GO
ALTER TABLE [dbo].[OfficeManagement] ADD
CONSTRAINT [FK_OfficeManagement_LookupJobTitle] FOREIGN KEY
(
[JobTitleID]
) REFERENCES [dbo].[LookupJobTitle] (
[JobTitleID]
),
CONSTRAINT [FK_OfficeManagement_Office] FOREIGN KEY
(
[OfficeID]
) REFERENCES [dbo].[Office] (
[OfficeID]
) NOT FOR REPLICATION ,
CONSTRAINT [FK_OfficeManagement_Personnel] FOREIGN KEY
(
[PersonnelID]
) REFERENCES [dbo].[Personnel] (
[PersonnelID]
) ON DELETE CASCADE
GO
alter table [dbo].[OfficeManagement] nocheck constraint [FK_OfficeManagement_Office]
GO
ALTER TABLE [dbo].[OfficeMls] ADD
CONSTRAINT [FK_OfficeMls_Office] FOREIGN KEY
(
[OfficeID]
) REFERENCES [dbo].[Office] (
[OfficeID]
) NOT FOR REPLICATION
GO
alter table [dbo].[OfficeMls] nocheck constraint [FK_OfficeMls_Office]
GO
ALTER TABLE [dbo].[PersonnelMLS] ADD
CONSTRAINT [FK_PersonnelMLS_Personnel] FOREIGN KEY
(
[PersonnelID]
) REFERENCES [dbo].[Personnel] (
[PersonnelID]
) NOT FOR REPLICATION
GO
alter table [dbo].[PersonnelMLS] nocheck constraint [FK_PersonnelMLS_Personnel]
GO
Here's the query I'm having trouble with:
SELECT distinct Personnel.PersonnelID,
Personnel.FirstNM,
Personnel.LastNM,
Office.OfficeNM,
Office.OfficeID,
OfficeMls.SourceID AS OfficeBoard,
PersonnelMLS.SourceID AS AgentBoard
FROM Personnel INNER JOIN
OfficeManagement ON
Personnel.PersonnelID = OfficeManagement.PersonnelID
INNER JOIN
Office ON OfficeManagement.OfficeID = Office.OfficeID
INNER JOIN
OfficeMls ON Office.OfficeID = OfficeMls.OfficeID
INNER JOIN
PersonnelMLS ON Personnel.PersonnelID = PersonnelMLS.PersonnelID
where officemls.sourceid <> personnelmls.sourceid
and office.officenm not like ('%admin%')
group by PersonnelMLS.SourceID,
Personnel.PersonnelID,
Personnel.FirstNM,
Personnel.LastNM,
Office.OfficeNM,
Office.OfficeID,
OfficeMls.SourceID
order by office.officenm
What I'm trying to retrieve are those agents who have source id's that are not in the Office's domain of valid source id's. Here's a small portion of the results:
PersonnelID FirstNM LastNM OfficeNM OfficeID OfficeBoard AgentBoard
---- ---------------- ---------------- ---------------- ---- ---- ----
18205 Margaret Peggy Quattro Aventura North 650 906 908
18205 Margaret Peggy Quattro Aventura North 650 918 908
15503 Susan Jordan Blackburn Point 889 920 909
15503 Susan Jordan Blackburn Point 889 921 909
15503 Susan Jordan Blackburn Point 889 921 920
15279 Sandra Humphrey Boca Beach North 890 917 906
15279 Sandra Humphrey Boca Beach North 890 906 917
15279 Sandra Humphrey Boca Beaches 626 917 906
15279 Sandra Humphrey Boca Beaches 626 906 917
13532 Michael Demcho Boca Downtown 735 906 917
14133 Maria Ford Boca Downtown 735 906 917
19126 Michael Silverman Boca Glades Road 736 917 906
18920 Beth Schwartz Boca Glades Road 736 906 917
If you take a look at Sandra Humphries, you'll see she's out of office 626. Office 626 is associated with source id's 907 and 916. Sandra Humphries is also associated with those two source id's , but she shows up in the results.
I know this was AWFULLY long winded, but just wanted to make sure I made myself as clear as possible.
Any help would be greatly appreciated.
Thanks in advance!
Could you edit the post and wrap [ code] [ /code ] tags around it?
Just remove the spaces in the tags
|||You're right...that is a LOT better!! Thanks for the tip...
|||Do the same with the data...
Why are you doing a GROUP BY? To get DISTINCT data?
EDIT: And can you post the DDL for the tables
|||...and I guess I really don't need the GROUP BY...
|||As far as i can see .. the rows are not exactly duplicates ... OfficeBoard id seems to be differing ... a join problem .. maybe ??
|||didnt read the post completely .. my bad !!!
Didnt get what exactly you were trying to accomplish ...
How do you decide that Sandra Humphries is outta office 626 ?
|||Office 626 is the office she's associated with - it's set up in the main table. Been working on the join and can't seem to work out the bugs on this one...
|||you could try replacing
"where officemls.sourceid <> personnelmls.sourceid"
with
"where personnelmls.sourceid not in
(select sourceid from officemls omls join office o on omls.officeid = o.officeid
where officemanagement.officeid = omls.officeid)"
This will give you duplicates because of the "distinct" but only the wrong personnelmls.sourceid should be selected.
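One way to splice that idea into the original query (an untested sketch; the OfficeMls join moves entirely into the subquery so it no longer multiplies the rows):
SELECT DISTINCT Personnel.PersonnelID,
Personnel.FirstNM,
Personnel.LastNM,
Office.OfficeNM,
Office.OfficeID,
PersonnelMLS.SourceID AS AgentBoard
FROM Personnel
INNER JOIN OfficeManagement ON Personnel.PersonnelID = OfficeManagement.PersonnelID
INNER JOIN Office ON OfficeManagement.OfficeID = Office.OfficeID
INNER JOIN PersonnelMLS ON Personnel.PersonnelID = PersonnelMLS.PersonnelID
WHERE Office.OfficeNM NOT LIKE '%admin%'
AND PersonnelMLS.SourceID NOT IN
(SELECT omls.SourceID
FROM OfficeMls omls
WHERE omls.OfficeID = Office.OfficeID)
ORDER BY Office.OfficeNM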
Eliminating Duplicates
I have a table with a field called PID. This field can have duplicate
values in it. I have 2 other fields called CAU and SUF. I may have 3 rows
with the same PID and same CAU, but the SUF field when concatenated with the
2 aforementioned fields make the row unique.
I want to know how to select 1 of the 3 rows but not the other 2. In
other words I need 1 row for each SUF. Hope this makes sense and I would
appreciate any help.
Try the following query. Work from the inner select out.
CREATE TABLE OneSUF
(
PID INT NOT NULL,
CAU INT NOT NULL,
SUF INT NOT NULL,
Description VARCHAR(255) NOT NULL
)
INSERT INTO OneSUF VALUES (1, 2, 3, '1, 2, 3')
INSERT INTO OneSUF VALUES (1, 2, 4, '1, 2, 4')
INSERT INTO OneSUF VALUES (1, 2, 5, '1, 2, 5')
INSERT INTO OneSUF VALUES (1, 3, 1, '1, 3, 1')
INSERT INTO OneSUF VALUES (1, 3, 2, '1, 3, 2')
INSERT INTO OneSUF VALUES (1, 3, 3, '1, 3, 3')
INSERT INTO OneSUF VALUES (1, 4, 3, '1, 4, 3')
INSERT INTO OneSUF VALUES (1, 4, 4, '1, 4, 4')
INSERT INTO OneSUF VALUES (1, 4, 5, '1, 4, 5')
INSERT INTO OneSUF VALUES (1, 4, 6, '1, 4, 6')
SELECT SUF3.SUF, SUF3.CAU, SUF3.PID, SUF3.Description
FROM OneSUF AS SUF3 INNER JOIN
(
SELECT SUF1.SUF, SUF1.CAU, MIN(SUF1.PID) AS PID FROM OneSUF AS SUF1 INNER
JOIN
(SELECT SUF, MIN(CAU) AS CAU FROM OneSUF GROUP BY SUF) AS SUF2
ON SUF1.SUF = SUF2.SUF AND SUF1.CAU = SUF2.CAU
GROUP BY SUF1.SUF, SUF1.CAU) AS SUF4
ON SUF3.SUF = SUF4.SUF AND SUF3.CAU = SUF4.CAU AND SUF3.PID = SUF4.PID
ORDER BY 1,2,3
--
Barry McAuslin
Look inside your SQL Server files with SQL File Explorer.
Go to http://www.sqlfe.com for more information.
"Jeff Humphrey" <jeffhumphrey@.cox-internet.com> wrote in message
news:%23OzmAYOmDHA.2416@.TK2MSFTNGP10.phx.gbl...
> I have a table with a field called PID. This field can have duplicate
> values in it. I have 2 other fields called CAU and SUF. I may have 3
rows
> with the same PID and same CAU, but the SUF field when concatenated with
the
> 2 aforementioned fields make the row unique.
> I want to know how to select 1 of the 3 rows but not the other 2. In
> otherwords I need 1 row for each SUF. Hopes this makes sense and I would
> appreciate any help.
>
Eliminating Duplicated
I have 3 tables Employee (empno,name,city), Emp-Pl (empno,noofleaves), and
Emp-Cl (empno,noofleaves). Now i require the report in the following format
Sample Data
employee
1 ravi kumar rajahmundry
2 ravindra kakinada
emp-pl emp-cl
empno noofleave empno noofleave
1 2 1 4
1 3 1 2
2 4 1 3
2 1 2 1
2 1
EMPNO NAME CL PL
1 ravikumar 4 2
2 3
3
2 ravindra 1 4
1
1
"venkataramarao" <venkataramarao@.discussions.microsoft.com> wrote in message
news:AC5F930E-5EE5-41B2-AB7B-C809082EF5C7@.microsoft.com...
I don't understand your requirements. Why does data from the CL table appear
in both the CL and PL columns? What determines this output?
Please post DDL and post sample data as INSERT statements rather than
sketches of tables.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
eliminating duplicate records from a table
Hi All
I am having a problem selecting the data from Oracle. The problem is there are multiple entries per customer; I want to select the latest updated value. I know the customer ids which have multiple entries corresponding to them.
The columns are Cust_ID, Update_Date, Cust_Name,Cust_address.
Select ADDRESS_LINE1, LAST_UPDATE_DATE,ADDRESS_LINE2, ADDRESS_LINE3, CITY, STATE, ZIP,COUNTRY
from Customer_Table s where s.cust_id in (101,102,103,104,105,106)
Group By ADDRESS_LINE1, LAST_UPDATE_DATE,ADDRESS_LINE2, ADDRESS_LINE3, CITY, STATE, ZIP,COUNTRY
having .........(no idea)
I know this is not correct as I am still getting multiple records.
Any help will be useful.
Well, we don't really do oracle around these parts, but...
You should be able to say
count(*) > 1
in the having clause to get groups where there are > 1 rows in there.
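Against the table in the original post, that would look something like:
select cust_id, count(*) as entry_count
from Customer_Table
group by cust_id
having count(*) > 1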
|||I guess that the latter means the one with the latest LAST_UPDATE. I do not know if this works for "oracle", but it does for SQL Server.
select *
from Customer_Table s
where s.cust_id in (101,102,103,104,105,106)
and last_update = (
select max(a.last_update)
from Customer_Table as a
where a.cust_id = s.cust_id
)
go
AMB
|||Thanks for replying..I am sorry for posting an oracle query here..
I found the solution.....Thanks a lot...
Eliminating Duplicate Records
Hi
How can i eliminate duplicate records from a single table, when i use a query that links multiple tables together?
There is a surrogate key in TABLE1 and i just can't seem to wrap my head around the layout.
TABLE1
ID | memQuestionaireID | QuestscoreID | DateOfAssessment | etc.
TABLE2
memQuestionaireID | MemberID | Answer1 | Answer2 | etc.
TABLE3
MemberID | Name | Surname | IDNo | etc.
t1.memQuestionaireID = t2.memQuestionaireID
t2.MemberID = t3.MemberID
That's how the tables link, obvious enough. How can i delete duplicate records from TABLE1 where MemberID occurs more than once and the t1.DateOfAssessment = MAX(DateOfAssessment)
The ID is unique, but there are duplicate memberID's with different date of assessments, i want to take the last used date of assessment and keep only that one, and remove the rest from the table?
I'm sure the query is simple, i'm just too stupid to see how to solve it!!
Any help would be greatly appreciated.
I'm not very clued up on how to remove records or update records, Please help.
Kind Regards
Carel Greaves
See if this does what you need...
Code Snippet
DELETE t1
FROM Table1 t1
INNER JOIN Table2 t2
on t1.memQuestionaireID = t2.memQuestionaireID
INNER JOIN
(
SELECT t2b.MemberID, max(t1b.DateOfAssessment) as DateOfAssessment
FROM table1 t1b
INNER JOIN table2 t2b
ON t1b.memQuestionaireID = t2b.memQuestionaireID
AND t2b.MemberID in
(
SELECT MemberID
FROM Table3
GROUP BY MemberID
HAVING count(*)>1
)
GROUP BY t2b.MemberID
) as list
ON t1.DateOfAssessment < list.DateOfAssessment
and t2.MemberID = list.MemberID
To view what will be deleted w/o actually deleting, just change the first line from DELETE t1 to SELECT t1.*
|||
I'm a bit confused Carel,
Call me dense today, but it seems like the MemberID 'should' be in Table1 since it is the Member that has a memQuestionaireID, has a QuestScoreID and a DateOfAssessment. Then for Table2, it is the Questionaire that has a set of answers, etc. Please help me out here...
It seems like your Table1 has a one-to-many relationship with Table2. I'm assuming that a Member may have multiple instances of the same questionnaire (re-takes, etc.)
Is one questionnaire (MemQuestionnaireID) with a QuestScoreID and a DateOfAssessment directly linked to one or more set(s) of answers in Table2?
Is the QuestScoreID changed upon each re-take? (Shouldn't the score be directly related to a set of answers?)
I ask this because it seems from the design you presented that perhaps you want to also delete rows from Table2...
|||I know the database structure is a bit stupid. Not my design!
I am only trying to remove duplicate entries in the QuestScore table.
Problem:
A member fills in a Questionaire - MemQuestionaires
The Querstionaire's Score is placed in the QuestScore Table.
There are duplicate entries in the QuestScore table with regards to a questionaireID from the memQuestionaires Table
If a duplicate exists, then the members can't log into there profiles anymore, so i need to remove the duplicates so that the members can access their profiles again.
I'm quite new to all of this, so any help would be greatly appreciated
Kind Regards
Carel Greaves.
|||You gave us Table1, Table2, Table3...I'm guessing Table1 is QuestScore?
Did my code above work? If not, what was not quite right?
Are there multiple entries just in Table1, or is it Table3?
Please clarify and maybe post some sample data...
|||Okay, so i'm a bit of a moron :-) he he
Here is the code i used to find the duplicates
SELECT me.MemberID, /*me.QuestionaireID,*/ q.memQuestionaireID, q.*, me.*
FROM questscore q, memQuestionaires m, Members me
WHERE q.memquestionaireID = m.memquestionaireID
AND m.MemberiD = me.memberid
AND me.ClientID = 124
ANd m.questionaireid = 85
AND q.sectionid >= 1000000
ORDER BY me.memberID
Here is my results:
i have marked the duplicate values in red that i want to eliminate.
I need to eliminate only the record with the oldest date, and keep the newer one.
Hopefully when i've gone for my SQL course then i will only be half a moron :-) he he
Thanks for helping me out guys.
Kind Regards
Carel Greaves
|||Wow, talk about a 'bolixed' up database...
I'm still confused, in your query above, you have a WHERE clause that indicates the results should be filtered down to QuestionaireID = 85, but your duplicate records are for memQuestionaireID 11129. What is the difference between QuestionaireID and memQuestionaireID? How did that happen?
Also, the DateCompleted is obviously not a date, but a number of hours. Is that the number of hours since some event? How are the hours determined?
Which row is the 'newer one'?
It would be very helpful, and keep us from wasting our time, if you would post the DDL for each of the tables, as well as some sample data for each table (in the form of INSERT statements.)
As you've seen, you have folks sincerely attempting to help you, and to this point wasting our time because we just don't have the 'full' picture. You've inherited a very 'odd' database, and you need to help us so we can better help you.
|||Here is sample data in the Tables.
Member Table
memQuestionaires Table
QuestScore Table
When i executed this statement, i only used these three tables.
SELECT me.MemberID, /*me.QuestionaireID,*/ q.memQuestionaireID, q.*, me.*
FROM questscore q, memQuestionaires m, Members me
WHERE q.memquestionaireID = m.memquestionaireID
AND m.MemberiD = me.memberid
AND me.ClientID = 124
ANd m.questionaireid = 85
AND q.sectionid >= 1000000
ORDER BY me.memberID
I am only trying to remove the duplicate records for where ClientID = 124
AND QuestionaireID = 85
as for the 1000000, there it is just to narrow down the search for myself, there are a lot of duplicates.
There are a lot of members -> Members Table
Each member can fill in a number of questionaires -> QuestionaireID on memQuestionaires Table in this case 85
And all the questionaire's data is stored in the QuestScore Table which is linked to the memQuestionaires table using the memQuestionairesID.
There seems to be duplicate values in the QuestScore Table referring to the members.
The date completed is an actual DATETIME field in the database, i used excel to transfer the data to here. The newer date will be a actual date with the time in the database i.e. 2007/05/23 02:05:11
All the fields get inserted into the database via a application which i haven't seen.
I'm new to the company and am trying to figure a lot of it out for myself too.
If there is any more information you need, please don't hesitate to ask.
|||try this one,
DECLARE @QuestScore TABLE (
QuestScoreID int
, memQuestionaireID int
, DateCompleted smalldatetime
)
DECLARE @MemQuestionaire TABLE (
MemQuestionaireID int
, MemberID int
, QuestionaireID int
, UpdateDate smalldatetime
)
INSERT
INTO @QuestScore
SELECT 1, 1, dateadd(day,-1, getdate()) UNION ALL
SELECT 2, 2, dateadd(day,-1, getdate()) UNION ALL
SELECT 3, 3, dateadd(day,-3, getdate())
INSERT
INTO @MemQuestionaire
SELECT 1, 1, 4, dateadd(day,-1, getdate()) UNION ALL
SELECT 2, 1, 4, dateadd(day,-1, getdate()) UNION ALL
SELECT 3, 2, 5, dateadd(day,-3, getdate())
select *
from @memquestionaire
select *
from @questscore
declare @tobedeleted table( -- create a table var for the data to be deleted
memquestionaireid int
, datecompleted smalldatetime
)
insert
into @tobedeleted -- the memquestionaireid and datecompleted to be deleted
select MIN(qq.memquestionaireid) as memquestionaireid -- just in case if they have the same date, get the lowest id
, mq.datecompleted
from @questscore qq inner join
(
select m.memberid
, m.questionaireid
, MIN(q.datecompleted) as datecompleted -- delete the lowest date
from @memquestionaire m inner join
@questscore q on m.memquestionaireid = q.memquestionaireid
group by
m.memberid
, m.questionaireid
having count(m.memberid) > 1 -- member having more than 1 quest score
) mq on qq.datecompleted = mq.datecompleted
group by
mq.datecompleted
delete @questscore -- delete in questscore
from @questscore q inner join
@tobedeleted dq on q.memquestionaireid = dq.memquestionaireid
and q.datecompleted = dq.datecompleted
delete @memquestionaire -- delete in memquestionaire
from @memquestionaire m inner join
@tobedeleted dq on m.memquestionaireid = dq.memquestionaireid
select *
from @questscore
select *
from @memquestionaire
|||
Thanks, that's awesome, that's why i come to you guys for help.
Another quick question, that i think it might be easier to do, i just can't do it. (I just thought of it now)
If i added memQuestionaireID and SectionID together to create a unique field, and then filter out the duplicates by taking out the old dates within the duplicates it might be easier, but how would i go about accomplishing this?
I don't know how to compare the dates against itself to get keep the new date and get rid of the old date?
sorry for all of this i'm just trying to learn from all the things myself?
|||Thanks, that was a start.
Here is an example of the 'best' form for DDL and sample data. In this fashion, each person that wants to try and help you can just run this code and have 'your' problem to work with. Without this, folks are turned away because it takes so much time and effort to duplicate your situation -and helps to keep us from running off on tangents that don't really help you.
Now if you would only add to the sample data so that examples of the duplicate records you want to find and delete are represented, we 'should' be able to help you.
Code Snippet
DECLARE @Members table
( med_aid int,
MemberID int,
ClientID int,
Name varchar(20),
Surname varchar(20),
Username varchar(100),
Password varchar(20)
)
INSERT INTO @Members VALUES ( NULL, 10000000, 1, 'NiftyName', 'NiftySurname', 'his', 'test' )
INSERT INTO @Members VALUES ( NULL, 10000001, 5, 'NiftyTest2', 'Surname2', 'nifty@ybo.oa', 'JBCYWO' )
INSERT INTO @Members VALUES ( NULL, 10000002, 5, 'KeithTest', 'WilsonTest', 'willi', 'willi' )
DECLARE @memQuestionaires table
( memQuestionaireID int,
MemberID int,
QuestionaireID int,
LastPage int,
Paused int,
Complete int,
MaxPages int,
Sent int,
LastSaved int,
InitMail int,
ReceiveMail int,
UpdateDate datetime
)
INSERT INTO @memQuestionaires VALUES ( 871, 10000000, 85, 1, 0, 0, 1, 1, 0, 1, 1, '2007/02/25 06:23' )
INSERT INTO @memQuestionaires VALUES ( 872, 10000001, 85, 1, 0, 0, 1, 1, 0, 1, 1, '2007/02/25 06:23' )
INSERT INTO @memQuestionaires VALUES ( 873, 10000002, 85, 1, 0, 0, 1, 1, 0, 1, 1, '2007/02/25 06:23' )
DECLARE @QuestScore table
( QuestScoreID int,
memQuestionaireID int,
AnswerID int,
AnswerValue int,
SectionID int,
Page int,
QuestType int,
AnswerText int,
AnswerShort int,
Updatedate int,
DefaultValue int,
DateCompleted datetime
)
INSERT INTO @QuestScore VALUES ( 4641, 871, 0, -9, 361, 1, 9, '', '', NULL, NULL, '2007/02/07 22:31' )
INSERT INTO @QuestScore VALUES ( 4642, 872, 0, -5, 362, 1, 6, '', '', NULL, NULL, '2007/02/07 22:31' )
INSERT INTO @QuestScore VALUES ( 4643, 873, 0, 0, 345, 1, 2, '', '', NULL, NULL, '2007/02/07 22:31' )

Sorry I'm sending you guys on a wild goose chase like this, and thanks a lot Arnie, I'd buy you a case of beer if I knew where you were.
I'm really learning a lot from this!
I just thought that it might be easier if I added the memQuestionaireID and SectionID together in the QuestScore table to get a single unique value and then removed the duplicates based on the Date, i.e. only removing the rows with the old dates and keeping the newest one.
Sorry about this, I only saw it now.
That way I'm only using a single table.
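For reference, here is one common way to do that last step. This is only a sketch; it assumes the @QuestScore table variable above and SQL Server 2005 or later so that ROW_NUMBER() is available. It ranks the rows for each memQuestionaireID and SectionID pair by DateCompleted and deletes everything except the newest row:
WITH Ranked AS
(
    SELECT QuestScoreID,
           ROW_NUMBER() OVER (PARTITION BY memQuestionaireID, SectionID
                              ORDER BY DateCompleted DESC) AS rn
    FROM @QuestScore
)
DELETE FROM Ranked   -- removes the underlying rows in @QuestScore
WHERE rn > 1         -- rn = 1 is the newest DateCompleted per pair, so it is kept
The same pattern works against a permanent table; only the FROM line changes.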
|||Thanks,
And my desk is under the stuff somewhere, or so it was a few months ago...
And UPS serves Portland, OR! But it would be much more enjoyable to share, so if you can wrangle it, start lobbying management to let you come to the PASS (Professional Association for SQL Server) Conference in September. (http://sqlpass.org/)
|||Come on guys, I'm a newbie at SQL Server in South Africa and it's late at night and my CEOs say I can't go home until I get this problem sorted out.
Please could you guys help me out quick?
Eliminating Combinatorial Relationship Multiplication
Suppose I have users that can belong to organizations. Organizations
are arranged in a tree. Each organization has only one parent
organization, but a user may be a member of multiple organizations.
The problem that I'm facing is that both organizations and individual
users may have relationships with other entities which are
semantically the same. For instance, an individual user can purchase
things and so can an organization. An individual user can have
business partners and so can an organization. So it seems that I would
need to have a duplicate set of link tables that link a user to a
purchase and then a parallel link table linking an organization to a
purchase. If I have N entities with which both users and organizations
may have relationships then I need 2*N link tables. There is nothing
wrong with that per se but just not elegant to have two different
tables for a relationship which is the same in nature, e.g.
purchaser->purchaseditem.
One other approach I was thinking of is to create an intermediate
entity (say it's called "holder") that will be used to hold references
to all the relationships that both an organization and an individual
may have. There will be 2 link tables linking organizations to
"holder" and users to "holder". Holder will in turn reference the
purchases, partners and so on. In this case the number of link tables
will be N+2 as opposed to 2*N but it will have a performance cost of
an extra join.
Is there a better way of modelling this notion of 2 different entities
that can possess similar relationships with N other entities?"Jeff Lanfield" <jlanfield2003@.yahoo.com> wrote in message
news:235c483f.0406301220.1e41d7c4@.posting.google.com...
> Suppose I have users that can belong to organizations. Organizations
> are arranged in a tree. Each organization has only one parent
> organization but a user maybe a member of multiple organizations.
> The problem that I'm facing that both organizations and individual
> users may have relationships with other entities which are
> semantically the same. For instance, an individual user can purchase
> things and so can an organization. An individual user can have
> business partners and so can an organization. So it seems that I would
> need to have a duplicate set of link tables that link a user to a
> purchase and then a parallel link table linking an organization to a
> purchase. If I have N entities with which both users and organizations
> may have relationships then I need 2*N link tables. There is nothing
> wrong with that per se but just not elegant to have two different
> tables for a relationship which is the same in nature, e.g.
> purchaser->purchaseditem.
> One other approach I was thinking of is to create an intermediate
> entity (say it's called "holder") that will be used to hold references
> to all the relationships that both an organization and an individual
> may have. There will be 2 link tables linking organizations to
> "holder" and users to "holder". Holder will in turn reference the
> purchases, partners and so on. In this case the number of link tables
> will be N+2 as opposed to 2*N but it will have a performance cost of
> an extra join.
This is common scenario for a Customer, for example, where the Customer can
then be either an organization or an individual.
> Is there a better way of modelling this notion of 2 different entities
> that can possess similar relationships with N other entities?
If you are just looking to model it, then using an interface/implementation
approach would work where you have Customer (Holder) as an interface and
Organization and Person both implementing that interface. You could toss in
an abstract class (partial implementation of the Interface) for the
relationship implementations and extend that too. This is more handily
modeled in UML OO types of diagrams than with relations, it seems to me.
But, if you are looking to model it in order to implement in a SQL database
then I think you are headed down the right path with your Holder pattern,
although there could be other approaches that I'm missing too. I'd suggest
putting some work into picking a meaningful name for the Holder relation
(for example, it makes complete sense to everyone that a Customer could be a
person or an org).
Cheers! --dawn|||"Jeff Lanfield" <jlanfield2003@.yahoo.com> wrote in message
news:235c483f.0406301220.1e41d7c4@.posting.google.com...
> Suppose I have users that can belong to organizations. Organizations
> are arranged in a tree. Each organization has only one parent
> organization but a user maybe a member of multiple organizations.
> The problem that I'm facing that both organizations and individual
> users may have relationships with other entities which are
> semantically the same. For instance, an individual user can purchase
> things and so can an organization. An individual user can have
> business partners and so can an organization. So it seems that I would
> need to have a duplicate set of link tables that link a user to a
> purchase and then a parallel link table linking an organization to a
> purchase. If I have N entities with which both users and organizations
> may have relationships then I need 2*N link tables. There is nothing
> wrong with that per se but just not elegant to have two different
> tables for a relationship which is the same in nature, e.g.
> purchaser->purchaseditem.
> One other approach I was thinking of is to create an intermediate
> entity (say it's called "holder") that will be used to hold references
> to all the relationships that both an organization and an individual
> may have. There will be 2 link tables linking organizations to
> "holder" and users to "holder". Holder will in turn reference the
> purchases, partners and so on. In this case the number of link tables
> will be N+2 as opposed to 2*N but it will have a performance cost of
> an extra join.
> Is there a better way of modelling this notion of 2 different entities
> that can possess similar relationships with N other entities?
You need to convert the following into an ERD. I've supplied participation
(may) constraints and cardinality constraints. Once the ERD is done, it is a
snap to convert to a relational schema.
PEOPLE n (may) have m BUSINESS_PARTNERS
ORGANIZATIONS n (may) have m BUSINESS_PARTNERS
PEOPLE n (may) belong_to m ORGANIZATIONS
PEOPLE 1 (may) buy n GOODS
ORGANIZATIONS 1 (may) buy n GOODS
So, in mock pseudo ERD form:
PEOPLE == >have m:n ==> BUSINESS_PARTNERS <== n:m have <== ORGANIZATIONS
" ==> people_order ==> 1:m GOODS m:1
<== orgs_order <== "
So, you wind up with
PEOPLE
person_id PK
etc
ORGANIZATIONS
org_id PK
etc
BUSINESS_PARTNERS
person_id PK (FK)
org_id PK (FK)
GOODS
item_id (PK)
description
etc
PEOPLE_ORDER
order_id PK
person_id (FK)
item_id (FK)
order_date
etc
You can now either create an ORGS_ORDER table, or add in org_id to the
PEOPLE_ORDER table, and change the table name to ORDER. This is a decision
based on many things, primarily the semantics of person_id and org_id. If
they are mutually exclusive, then you can combine the tables simply by
adding org_id. If they are not mutually exclusive, a horrible option is to
add a flag to the ORDERS table, indicating whether the order is from a
person or an organization. Don't do it.|||Here is the link on Amazon.com for my new book on "Trees & Hierarchies
in SQL"
http://www.amazon.com/exec/obidos/t...product-details
Separate the tree structure from the nodes. I like the nested sets
model for the structure, but you can pick whatever works best for your
situation. Then the nodes can go into another table.
The classic scenario calls for a root class with all the common
attributes and then specialized sub-classes under it. As an example,
let's take the class of Vehicles and find an industry standard
identifier (VIN), and add two mutually exclusive sub-classes, Sport
utility vehicles and sedans ('SUV', 'SED').
CREATE TABLE Vehicles
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) NOT NULL
CHECK(vehicle_type IN ('SUV', 'SED')),
UNIQUE (vin, vehicle_type),
..);
Notice the overlapping candidate keys. I then use a compound candidate
key (vin, vehicle_type) and a constraint in each sub-class table to
assure that the vehicle_type is locked and agrees with the Vehicles
table. Add some DRI actions and you are done:
CREATE TABLE SUV
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) DEFAULT 'SUV' NOT NULL
CHECK(vehicle_type = 'SUV'),
UNIQUE (vin, vehicle_type),
FOREIGN KEY (vin, vehicle_type)
REFERENCES Vehicles(vin, vehicle_type)
ON UPDATE CASCADE
ON DELETE CASCADE,
..);
CREATE TABLE Sedans
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) DEFAULT 'SED' NOT NULL
CHECK(vehicle_type = 'SED'),
UNIQUE (vin, vehicle_type),
FOREIGN KEY (vin, vehicle_type)
REFERENCES Vehicles(vin, vehicle_type)
ON UPDATE CASCADE
ON DELETE CASCADE,
..);
I can continue to build a hierarchy like this. For example, if I had
a Sedans table that broke down into two-door and four-door sedans, I
could use a schema like this:
CREATE TABLE Sedans
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) DEFAULT 'SED' NOT NULL
CHECK(vehicle_type IN ('2DR', '4DR', 'SED')),
UNIQUE (vin, vehicle_type),
FOREIGN KEY (vin, vehicle_type)
REFERENCES Vehicles(vin, vehicle_type)
ON UPDATE CASCADE
ON DELETE CASCADE,
..);
CREATE TABLE TwoDoor
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) DEFAULT '2DR' NOT NULL
CHECK(vehicle_type = '2DR'),
UNIQUE (vin, vehicle_type),
FOREIGN KEY (vin, vehicle_type)
REFERENCES Sedans(vin, vehicle_type)
ON UPDATE CASCADE
ON DELETE CASCADE,
..);
CREATE TABLE FourDoor
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) DEFAULT '4DR' NOT NULL
CHECK(vehicle_type = '4DR'),
UNIQUE (vin, vehicle_type),
FOREIGN KEY (vin, vehicle_type)
REFERENCES Sedans (vin, vehicle_type)
ON UPDATE CASCADE
ON DELETE CASCADE,
..);
The idea is to build a chain of identifiers and types in a UNIQUE()
constraint that go up the tree when you use a REFERENCES constraint.
Obviously, you can do variants of this trick to get different class
structures.
If an entity doesn't have to be exclusively one subtype, you play with
the root of the class hierarchy:
CREATE TABLE Vehicles
(vin CHAR(17) NOT NULL,
vehicle_type CHAR(3) NOT NULL
CHECK(vehicle_type IN ('SUV', 'SED')),
PRIMARY KEY (vin, vehicle_type),
..);
Now start hiding all this stuff in VIEWs immediately and add an
INSTEAD OF trigger to those VIEWs.|||--CELKO-- wrote:
> Here is the link on Amazon.com for my new book on "Trees & Hierarchies
> in SQL"
> http://www.amazon.com/exec/obidos/t...product-details
> Separate the tree structure from the nodes. Ilike the nested sets
> model for the structure, but you can pick whatever works best for your
> situation. Then the nodes can go into another table.
<snip a clever way of storing tree structures
I have a couple of questions:
1. What's to stop you putting extra information into the Sedans or SUV
tables (like descriptions etc.) instead of creating separate tables?
2. Do you think it would be good to have primary keys that span two
tables in order to get rid of the vehicles table? Would there be any
disadvantage to doing it this way instead of the way you outlined?
John
PS: Apologies in advance for (predictably) missing obvious things.|||>> 1. What's to stop you putting extra information into the Sedans or
SUV tables (like descriptions etc.) instead of creating separate tables?
<<
If you don't need further subclass breakdowns, then by all means stop at
this level and get all the attributes in the table. In your case,
organizations might have tax status codes which are different from
individuals, so the Customers references the Organizations table and the
Organizations table has a column for tax status which is not in the
Individuals table.
>> 2. Do you think it would be good to have primary keys that span two
tables in order to get rid of the vehicles table? Would there be any
disadvantage to doing it this way instead of the way you outlined? <<
The reason for having a Vehicles table is to establish a class that is
then broken down into disjoint sub-classes; this is the mechanism that
assures a VIN belongs to SUV or Sedans but never both.
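A quick illustration of that mechanism, as a sketch only: it assumes the Vehicles, SUV and Sedans tables posted earlier, that their elided columns are nullable or defaulted, and the VIN value here is made up.
INSERT INTO Vehicles (vin, vehicle_type) VALUES ('1FAKE00XYZ1234567', 'SED');
INSERT INTO Sedans (vin, vehicle_type) VALUES ('1FAKE00XYZ1234567', 'SED');
-- succeeds: ('1FAKE00XYZ1234567', 'SED') exists in Vehicles
INSERT INTO SUV (vin, vehicle_type) VALUES ('1FAKE00XYZ1234567', 'SUV');
-- fails: the FOREIGN KEY needs ('1FAKE00XYZ1234567', 'SUV') in Vehicles,
-- but that vin's single Vehicles row says 'SED', so the VIN cannot be in both sub-classes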
What I am proposing is a bit complicated at first sight. You have a
nested set (or whatever) model for the tree structure that might look
like this:
CREATE TABLE Heirarchy
(vin CHAR(17) UNIQUE -- not null? default?
REFERENCES Vehicles (vin)
ON UPDATE CASCADE
ON DELETE ??, -- need a rule
lft INTEGER NOT NULL,
rgt INTEGER NOT NULL,
PRIMARY KEY (lft, rgt),
<<more constraints -- see book>>);
You now have to sit down and really think about the business rules for
the Heirarchy.
The nodes (vehicles) have their class hierarchy on that side of the
RDBMS. You do joins from OrgChart (structure)-> Vehicles (node class)
-> SUV or Sedans (node sub-class) to get your data.
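A sketch of that join, assuming the Heirarchy table above plays the structure role and using a made-up root VIN; it pulls every sedan in one subtree:
SELECT S.vin
FROM Heirarchy AS Root
JOIN Heirarchy AS H
  ON H.lft BETWEEN Root.lft AND Root.rgt   -- nested sets: H is inside Root's subtree
JOIN Vehicles AS V
  ON V.vin = H.vin                         -- structure -> node class
JOIN Sedans AS S
  ON S.vin = V.vin                         -- node class -> sub-class
WHERE Root.vin = '1FAKE00XYZ1234567';      -- hypothetical root vehicle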
--CELKO--
===========================
Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, datatypes, etc. in your
schema are.
|||> Separate the tree structure from the nodes. Ilike the nested sets
> model for the structure, but you can pick whatever works best for your
> situation. Then the nodes can go into another table.
> The classic scenario calls for a root class with all the common
> attributes and then specialized sub-classes under it.
The problem I described was not so much with modelling the tree
(thanks partly to tips from your book which I do own) but with 2
entities having semantically identical relationships with N other
entities. In my case the 2 entities are nodes and leaves -
organizations and users.
For simplicity of the example I'll use the path enumeration model (I
know nested set is better).
create table organizations (int id, varchar name ... etc)
create table users (int id, varchar name ... etc)
create table orgtree (varchar path, int orgId)
create table members (int orgId, int userId) // which organizations a
user belongs to
create table partners (int id, ... etc)
create table purchases (int id, ... etc)
create table customers (int id, ... etc)
The problem is that both users and organizations can have
relationships with partners, purchases, and customers. Thus I have to
have 2*N = 6 link tables to represent that. This example is
simplified, in reality I have about 12 entities with which both
organizations and users can have relationships so I have to have 24
link tables. Nothing wrong with that per se but I just think it is
inelegant.
In OO technology there are standard solutions for this kind of thing
but in SQL the only approach I could think of is to introduce an
"intermediate" entity (say it's called "holder") which will hold the
references to the relationships that both a user and an organization
can have. Users and Organizations can then in turn have a relationship
with this "holder" entity. Thus the number of link tables will be N +
2 or 14 instead of 24.
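Concretely, the holder idea might look something like this rough sketch (every table and column name here is illustrative only): one holder table, two link tables tying users and organizations to it, and then one link table per relationship pointing at the holder.
create table holders (holderId int primary key)
create table userHolders (userId int not null, holderId int not null references holders(holderId))
create table orgHolders (orgId int not null, holderId int not null references holders(holderId))
-- one link table per relationship, N of them in total, each pointing at the holder:
create table holderPurchases (holderId int not null references holders(holderId), purchaseId int not null)
create table holderPartners (holderId int not null references holders(holderId), partnerId int not null)
create table holderCustomers (holderId int not null references holders(holderId), customerId int not null)
-- total: N + 2 link tables instead of 2 * N
Whether a user and an organization ever share a holder row is the main design decision this sketch leaves open.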
My question is: Are there any other approaches to this problem? Seems
like a fairly common issue.
Thanks!
Jeff
P.S. CELKO,
I have to deal with tree stuff in SQL a lot and I originally bought
your book simply because there was nothing else on the topic and I
looked hard both recently and in the past! I did not have high
expectations because I thought it was just a quickie to capitalize on
success of "SQL for smarties" but I was pleasantly surprised: as the
title suggests, it is definitely the most comprehensive compilation of
SQL techniques for modelling trees in relational structures all
gathered in one place that I ever saw. Well worth the price, thanks for
putting it together!
jcelko212@.earthlink.net (--CELKO--) wrote in message news:<18c7b3c2.0407010727.4fbb407d@.posting.google.com>...
> <snip quoted DDL for the Vehicles / SUV / Sedans class hierarchy>|||> Separate the tree structure from the nodes. Ilike the nested sets
> model for the structure, but you can pick whatever works best for your
> situation. Then the nodes can go into another table.
CELKO,
Just wanted to double check: by "separating the nodes from the tree"
do you mean doing this (using path enumeration model):
nodes (id int, name varchar, ... etc)
nodetree (path varchar, int nodeId)
AS OPPOSED TO THIS:
nodes (id int, name varchar, path varchar, ... etc)
Thanks again!
Jeff
jcelko212@.earthlink.net (--CELKO--) wrote in message news:<18c7b3c2.0407010727.4fbb407d@.posting.google.com>...
> <snip quoted DDL for the Vehicles / SUV / Sedans class hierarchy>|||>> in reality I have about 12 entities with which both
organizations and users can have relationships so I have to have 24 link
tables. Nothing wrong with that per se but I just think it is inelegant.
<<
Well, sometimes the model is complex. If those relationships are all
different, then you need to have a table for each relationship.
>> I did not have high expectations because I thought it was just a
quickie to capitalize on success of "SQL for smarties" but I was
pleasantly surprised: as the title suggests, it is definitely the most
comprehensive compilation of SQL techiques for modelling trees in
relational structures all gathered in one place that I ever saw. <<
Thank you! Where were you when I needed jacket copy?
--CELKO--
===========================
Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, datatypes, etc. in your
schema are.
|||>> Just wanted to double check: by "separating the nodes from the
tree" do you mean doing this (using path enumeration model): <<
Where is the DDL? What you posted was useless on several levels.
NEVER use something as vague as "id" for a column name. NEVER, NEVER,
NEVER name a data element for its current location -- a node_id is always a
node_id wherever it appears. This is fundamental data modeling --
name things for what they are in the logical model, not for how or
where they are PHYSICALLY stored or how they are used in one place.
I think that you might have meant this:
CREATE TABLE Nodes
(node_id INTEGER NOT NULL PRIMARY KEY,
node_name CHAR(20) NOT NULL,
... );
CREATE TABLE Tree
(node_id INTEGER NOT NULL
REFERENCES Nodes (node_id)
ON UPDATE CASCADE
ON DELETE CASCADE,
path VARCHAR(100) NOT NULL
CHECK (<< valid path predicate >>),
.. );
Now you need to decide how to handle DRI actions; I made a guess. And
how you want to build the path string - letters, digits, fixed or
variable substring components, etc.|||--CELKO-- wrote:
> Here is the link on Amazon.com for my new book on "Trees & Hierarchies
> in SQL"
> http://www.amazon.com/exec/obidos/t...product-details
> Separate the tree structure from the nodes. Ilike the nested sets
> model for the structure, but you can pick whatever works best for your
> situation. Then the nodes can go into another table.
> The classic scenario calls for a root class with all the common
> attributes and then specialized sub-classes under it. As an example,
> let's take the class of Vehicles and find an industry standard
> identifier (VIN), and add two mutually exclusive sub-classes, Sport
> utility vehicles and sedans ('SUV', 'SED').
> CREATE TABLE Vehicles
> (vin CHAR(17) NOT NULL PRIMARY KEY,
> vehicle_type CHAR(3) NOT NULL
> CHECK(vehicle_type IN ('SUV', 'SED')),
> UNIQUE (vin, vehicle_type),
> ..);
> Notice the overlapping candidate keys. I then use a compound candidate
> key (vin, vehicle_type) and a constraint in each sub-class table to
> assure that the vehicle_type is locked and agrees with the Vehicles
> table. Add some DRI actions and you are done:
> CREATE TABLE SUV
> (vin CHAR(17) NOT NULL PRIMARY KEY,
> vehicle_type CHAR(3) DEFAULT 'SUV' NOT NULL
> CHECK(vehicle_type = 'SUV'),
> UNIQUE (vin, vehicle_type),
> FOREIGN KEY (vin, vehicle_type)
> REFERENCES Vehicles(vin, vehicle_type)
> ON UPDATE CASCADE
> ON DELETE CASCADE,
> ..);
> CREATE TABLE Sedans
> (vin CHAR(17) NOT NULL PRIMARY KEY,
> vehicle_type CHAR(3) DEFAULT 'SED' NOT NULL
> CHECK(vehicle_type = 'SED'),
> UNIQUE (vin, vehicle_type),
> FOREIGN KEY (vin, vehicle_type)
> REFERENCES Vehicles(vin, vehicle_type)
> ON UPDATE CASCADE
> ON DELETE CASCADE,
> ..);
> I can continue to build a hierarchy like this. For example, if I had
> a Sedans table that broke down into two-door and four-door sedans, I
> could a schema like this:
> CREATE TABLE Sedans
> (vin CHAR(17) NOT NULL PRIMARY KEY,
> vehicle_type CHAR(3) DEFAULT 'SED' NOT NULL
> CHECK(vehicle_type IN ('2DR', '4DR', 'SED')),
> UNIQUE (vin, vehicle_type),
> FOREIGN KEY (vin, vehicle_type)
> REFERENCES Vehicles(vin, vehicle_type)
> ON UPDATE CASCADE
> ON DELETE CASCADE,
> ..);
> CREATE TABLE TwoDoor
> (vin CHAR(17) NOT NULL PRIMARY KEY,
> vehicle_type CHAR(3) DEFAULT '2DR' NOT NULL
> CHECK(vehicle_type = '2DR'),
> UNIQUE (vin, vehicle_type),
> FOREIGN KEY (vin, vehicle_type)
> REFERENCES Sedans(vin, vehicle_type)
> ON UPDATE CASCADE
> ON DELETE CASCADE,
> ..);
> CREATE TABLE FourDoor
> (vin CHAR(17) NOT NULL PRIMARY KEY,
> vehicle_type CHAR(3) DEFAULT '4DR' NOT NULL
> CHECK(vehicle_type = '4DR'),
> UNIQUE (vin, vehicle_type),
> FOREIGN KEY (vin, vehicle_type)
> REFERENCES Sedans (vin, vehicle_type)
> ON UPDATE CASCADE
> ON DELETE CASCADE,
> ..);
> The idea is to build a chain of identifiers and types in a UNIQUE()
> constraint that go up the tree when you use a REFERENCES constraint.
> Obviously, you can do variants of this trick to get different class
> structures.
> If an entity doesn't have to be exclusively one subtype, you play with
> the root of the class hierarchy:
> CREATE TABLE Vehicles
> (vin CHAR(17) NOT NULL,
> vehicle_type CHAR(3) NOT NULL
> CHECK(vehicle_type IN ('SUV', 'SED')),
> PRIMARY KEY (vin, vehicle_type),
> ..);
> Now start hiding all this stuff in VIEWs immediately and add an
> INSTEAD OF trigger to those VIEWs.
Joe ... sorry ... but integrity demands that I write the following:
We have a usenet group named comp.databases.oracle.marketplace
specifically designated for promotions. In the future it would be
appreciated if you posted book, or any other, promotions there.
Thanks.
For everyone else ... I recommend Joe's books to my students and
highly recommend them.
--
Daniel Morgan
http://www.outreach.washington.edu/...oad/oad_crs.asp
http://www.outreach.washington.edu/...aoa/aoa_crs.asp
damorgan@.x.washington.edu
(replace 'x' with a 'u' to reply)|||"Daniel Morgan" <damorgan@.x.washington.edu> wrote in message news:1088831855.922891@.yasure...
> Joe ... sorry ... but integrity demands that I write the following:
> We have a usenet group named comp.databases.oracle.marketplace
> specifically designated for promotions. In the future it would be
> appreciated if you posted book, or any other, promotions there.
Uh, so a guy posts two lines with a link where you can buy his
book, and 100 lines of helping someone out with his question,
and you bust his balls for "promotions?" You also assume
that because the original poster included an oracle newsgroup
in one of the four newsgroups he posted to, that everyone responding
on any of those newsgroups should somehow know your local
culture and customs and adhere to them?
You have a funny idea of integrity.
Marshall|||Marshall Spight wrote:
> "Daniel Morgan" <damorgan@.x.washington.edu> wrote in message news:1088831855.922891@.yasure...
>>Joe ... sorry ... but integrity demands that I write the following:
>>
>>We have a usenet group named comp.databases.oracle.marketplace
>>specifically designated for promotions. In the future it would be
>>appreciated if you posted book, or any other, promotions there.
>
> Uh, so a guy posts two lines with a link where you can buy his
> book, and 100 lines of helping someone out with his question,
> and you bust his balls for "promotions?" You also assume
> that because the original poster included an oracle newsgroup
> in one of the four newsgroups he posted to, that everyone responding
> on any of those newsgroups should somehow know your local
> culture and customs and adhere to them?
> You have a funny idea of integrity.
>
> Marshall
I don't show favoritism when it comes to calling spam spam. Even when it
is someone as esteemed as Joe Celko.
But exactly what is it you have contributed to c.d.o.server?
--
Daniel Morgan
http://www.outreach.washington.edu/...oad/oad_crs.asp
http://www.outreach.washington.edu/...aoa/aoa_crs.asp
damorgan@.x.washington.edu
(replace 'x' with a 'u' to reply)|||Daniel Morgan <damorgan@.x.washington.edu> wrote:
>Marshall Spight wrote:
>> "Daniel Morgan" <damorgan@.x.washington.edu> wrote in message news:1088831855.922891@.yasure...
>>
>>>Joe ... sorry ... but integrity demands that I write the following:
>>>
>>>We have a usenet group named comp.databases.oracle.marketplace
>>>specifically designated for promotions. In the future it would be
>>>appreciated if you posted book, or any other, promotions there.
>> Uh, so a guy posts two lines with a link where you can buy his
>> book, and 100 lines of helping someone out with his question,
>> and you bust his balls for "promotions?" You also assume
>> that because the original poster included an oracle newsgroup
>> in one of the four newsgroups he posted to, that everyone responding
>> on any of those newsgroups should somehow know your local
>> culture and customs and adhere to them?
>>
>> You have a funny idea of integrity.
>I don't show favoritism when it come to calling spam spam. Even when it
>is someone as esteemed as Joe Celko.
What spam? OP asked for help, and Mr. Celko gave it.
>But exactly what is it you have contributed to c.d.o.server?
Possibly a reminder of priorities? You may have missed this point
though.
Sincerely,
Gene Wirchenko
Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.|||"Daniel Morgan" <damorgan@.x.washington.edu> wrote in message news:1088887202.344807@.yasure...
> I don't show favoritism when it come to calling spam spam. Even when it
> is someone as esteemed as Joe Celko.
Spam is by definition unsolicited. An on-topic, helpful response to a
usenet post requesting help is by definition solicited. Your "integrity"
is just ball-busting on a guy who bends over backwards to help all
comers for free, and also happens to sell some books for which
he's probably lucky to clear $25,000 annually. (Trees and
Hierarchies in SQL doesn't have the same cachet as the latest
work by David Sedaris. In fact, it's Amazon's 16,754th most
popular item, right behind the $350 "Suunto Yachtsman Wristop
Computer Watch w/ Barometer and Compass", and you can
bet a $350 wrist barometer isn't exactly flying off the shelves.)
You're not engaging in spam-vigilance; you're just engaging
in gratuitous anticommercialism.
> But exactly what is it you have contributed to c.d.o.server?
I could just as well ask what you've contributed to c.d.theory,
but that wasn't the point. The point was that this thread isn't
specific to your newsgroup; it spans multiple newsgroups.
Expecting someone on this thread to conform to the customs
of one of those newsgroups is as unrealistic as those folks in
Snakewater, Iowa, who get upset when they see something
that violates a local ordinance posted on the internet from
New York City.
But the absolute frosting on the cake, the hypocritical parsley
on the potatoes, is the fact that both of your posts on this
topic so far have included links to courses you teach at
Washington. You're giving those courses away for free,
I presume? Oh, no, wait, I see; Oracle Application Development
is $1875. At least Joe's links to his book on Amazon were
pertinent to the OP's question.
Marshall|||Daniel Morgan <damorgan@.x.washington.edu> wrote in message news:<1088831855.922891@.yasure>...
> --CELKO-- wrote:
> > <snip quoted DDL for the Vehicles / SUV / Sedans class hierarchy>
> Joe ... sorry ... but integrity demands that I write the following:
> We have a usenet group named comp.databases.oracle.marketplace
> specifically designated for promotions. In the future it would be
> appreciated if you posted book, or any other, promotions there.
> Thanks.
> For everyone else ... I recommend Joe's books to my students and
> highly recommend them.
Daniel Morgan: Cannot you just shut the ffff up? Take your
contributions and shove it! This an international newsgroup and it NOT
yours to guard. The charter of this newsgroup is clear and does not
allow you to add shameless lines of advertisements while denying
others!
So shut up! you can create your own newsgroup where you can control
things -- you crazy control freak. Watch the movie "Tow Truck" to see
what you look like.
Obviously you are downgrading this group and many others to your level
of integrity which you completely lack.
Remember, you big jumble of Q that Oracle itself is a product! and you
have no rights what so ever to determine what is good; and what is
like you of what you don't like. You get it?! Read Simon's peom! Or
download OMLET and run it and you would get random lines of insults
and images geared specifically to YOU! Till 2010.
Now what?!
Right on your tail.|||omlet@.omlet.org apparently said,on my timestamp of 5/07/2004 8:27 AM:
>
> Daniel Morgan: Cannot you just shut the ffff up? Take your
> contributions and shove it! This an international newsgroup and it NOT
> yours to guard. The charter of this newsgroup is clear and does not
> allow you to add shameless lines of advertisements while denying
> others!
> So shut up! you can create your own newsgroup where you can control
> things -- you crazy control freak. Watch the movie "Tow Truck" to see
> what you look like.
> Obviously you are downgrading this group and many others to your level
> of integrity which you completely lack.
> Remember, you big jumble of Q that Oracle itself is a product! and you
> have no rights what so ever to determine what is good; and what is
> like you of what you don't like. You get it?! Read Simon's peom! Or
> download OMLET and run it and you would get random lines of insults
> and images geared specifically to YOU! Till 2010.
> Now what?!
> Right on your tail.
forwarded to abuse.
--
Cheers
Nuno Souto
wizofoz2k@.yahoo.com.au.nospam|||Noons wrote:
> forwarded to abuse.
I have too as have a number of my students.
Here's what we have found:
Pinging omlet.org returns 66.218.79.158 which in return, when pinged,
returns p2w2.geo.scd.yahoo.com.
Complaints have been filed with:
abuse@.yahoo.com
groups-abuse@.google.com
I encourage others to do so too, and in case anyone wonders who this
spam-troll really is ...
Domain Name:OMLET.ORG
Created On:15-Jun-2004 10:54:38 UTC ... (been in business only 2 weeks)
Sponsoring Registrar:R52-LROR
Registrant Name:Amjad Daoud
Registrant Street1:Queen Noor Br 07
Registrant City:Amman
Registrant State/Province:Amman
Registrant Postal Code:9626
Registrant Country:JO ... (this is the country of Jordan)
Registrant Phone:+1.96265163864
Registrant Email:teraknowledgesystems@.yahoo.com
Yes our troll lives in the country of Jordan. You can purchase a
product from someone to use with your valuable Oracle database
where there is not only no company behind it there is just one
guy in a third-world country.
Please address complaints about his spamming and abuse to:
Tech Name:YahooDomains TechContact
Tech Organization:Yahoo! Inc
Tech Street1:701 First Ave.
Tech City:Sunnyvale
Tech State/Province:CA
Tech Postal Code:94089
Tech Country:US
Tech Phone:+1.619-881-3096
Tech Email:domain.tech@.YAHOO-INC.COM
Name Server:YNS1.YAHOO.COM
Name Server:YNS2.YAHOO.COM
Thank you.
--
Daniel Morgan
http://www.outreach.washington.edu/...oad/oad_crs.asp
http://www.outreach.washington.edu/...aoa/aoa_crs.asp
damorgan@.x.washington.edu
(replace 'x' with a 'u' to reply)|||
"Daniel Morgan" <damorgan@.x.washington.edu> wrote in message
news:1089057907.319050@.yasure...
> Here's what we have found:
> I encourage others to do so too and and in case anyone wonders who this
> spam-troll really is ...
> Domain Name:OMLET.ORG
> Created On:15-Jun-2004 10:54:38 UTC ... (been in business only 2 weeks)
> Sponsoring Registrar:R52-LROR
> Registrant Name:Amjad Daoud
> Registrant Street1:Queen Noor Br 07
> Registrant City:Amman
> Registrant State/Province:Amman
> Registrant Postal Code:9626
> Registrant Country:JO ... (this is the country of Jordan)
> Registrant Phone:+1.96265163864
> Registrant Email:teraknowledgesystems@.yahoo.com
> Yes our troll lives in the country of Jordan. You can purchase a
> product from someone to use with your valuable Oracle database
> where there is not only no company behind it there is just one
> guy in a third-world country.
You need to be more careful about this kind of language.
One of Amjad Daoud's signatures I found on the web is:
Amjad Daoud
The OMLET Team Lead
Tera Knowledge Systems, Inc.
Arlington, Texas <----- This is not Jordan
Besides: What do you have against "guys from third-world countries" ?
|||As the original poster I would agree here. If it is solicited and
relevant to the question asked it's not spam. I actually did end up
buying Mr. Celko's book and can attest that it is quite useful for
solving the class of problems I was interested in.
While in general people should not post ads for books they wrote on
tech newsgroups an exception should definitely be made for cases like
this one where the book contains pretty much the exact answer.
- Jeff
"Marshall Spight" <mspight@.dnai.com> wrote in message news:<8KNFc.20603$%_6.12831@.attbi_s01>...
> "Daniel Morgan" <damorgan@.x.washington.edu> wrote in message news:1088887202.344807@.yasure...
> > I don't show favoritism when it come to calling spam spam. Even when it
> > is someone as esteemed as Joe Celko.
> Spam is by definition unsolicited. An on-topic, helpful response to a
> usenet post requesting help is by definition solicited. Your "integrity"
> is just ball-busting on a guy who bends over backwards to help all
> comers for free, and also happens to sell some books for which
> he's probably lucky to clear $25,000 annually. (Trees and
> Hierarchies in SQL doesn't have the same cachet as the latest
> work by David Sedaris. In fact, it's Amazon's 16,754's most
> popular item, right behind the $350 "Suunto Yachtsman Wristop
> Computer Watch w/ Barometer and Compass", and you can
> bet a $350 wrist barometer isn't exactly flying off the shelves.)
> You're not engaging in spam-vigilance; you're just engaging
> in gratuitous anticommercialism.
>
> > But exactly what is it you have contributed to c.d.o.server?
> I could just as well ask what you've contributed to c.d.theory,
> but that wasn't the point. The point was that this thread isn't
> specific to your newsgroup; it spans multiple newsgroups.
> Expecting someone on this thread to conform to the customs
> of one of those newsgroups is as unrealistic as those folks in
> Snakewater, Iowa, who get upset when they see something
> that violates a local ordinance posted on the internet from
> New York City.
> But the absolute frosting on the cake, the hypocritical parsley
> on the potatoes, is the fact that both of your posts on this
> topic so far have included links to courses you teach at
> Washington. You're giving those courses away for free,
> I presume? Oh, no, wait, I see; Oracle Application Development
> is $1875. At least Joe's links to his book on Amazon were
> pertinent to the OP's question.
>
> Marshall|||"Marshall Spight" <mspight@.dnai.com> wrote in message
news:8KNFc.20603$%_6.12831@.attbi_s01...
> ...Trees and
> Hierarchies in SQL doesn't have the same cachet as the latest
> work by David Sedaris. In fact, it's Amazon's 16,754's most
> popular item, right behind the $350 "Suunto Yachtsman Wristop
> Computer Watch w/ Barometer and Compass...
How did you query this? Does amazon provide an SQL interface nowadays?
select book.* from books
where rank between 15000 and 16000
?|||"x" <x-false@.yahoo.com> wrote in message news:<40ea4378@.post.usenet.com>...
> "Daniel Morgan" <damorgan@.x.washington.edu> wrote in message
> news:1089057907.319050@.yasure...
> > Here's what we have found:
> > I encourage others to do so too and and in case anyone wonders who this
> > spam-troll really is ...
> > Domain Name:OMLET.ORG
> > Created On:15-Jun-2004 10:54:38 UTC ... (been in business only 2 weeks)
> > Sponsoring Registrar:R52-LROR
> > Registrant Name:Amjad Daoud
> > Registrant Street1:Queen Noor Br 07
> > Registrant City:Amman
> > Registrant State/Province:Amman
> > Registrant Postal Code:9626
> > Registrant Country:JO ... (this is the country of Jordan)
> > Registrant Phone:+1.96265163864
> > Registrant Email:teraknowledgesystems@.yahoo.com
> > Yes our troll lives in the country of Jordan. You can purchase a
> > product from someone to use with your valuable Oracle database
> > where there is not only no company behind it there is just one
> > guy in a third-world country.
> You need to be more careful about this kind of language.
> One of Amjad Daoud signatures I found on the web is:
> Amjad Daoud
> The OMLET Team Lead
> Tera Knowledge Systems, Inc.
> Arlington, Texas <----- This is not Jordan
> Besides: What do you have against "guys from third-world countries" ?
>
He doesn't. Daniel, I and others have just been dealing with this
troll in the ORACLE groups for a while now. And you should note that
you presented the company address, which may be only a sales office
and not the location of the "OMLET Team". It certainly is different
from the residence of this guy.
Calling Amjad a troll is certainly MUCH more polite than the crap he
has spewed in the ORACLE groups.|||"x" <x-false@.yahoo.com> wrote in message news:<40ea4378@.post.usenet.com>...
> "Daniel Morgan" <damorgan@.x.washington.edu> wrote in message
> news:1089057907.319050@.yasure...
> > Here's what we have found:
> > I encourage others to do so too and and in case anyone wonders who this
> > spam-troll really is ...
> > Domain Name:OMLET.ORG
> > Created On:15-Jun-2004 10:54:38 UTC ... (been in business only 2 weeks)
> > Sponsoring Registrar:R52-LROR
> > Registrant Name:Amjad Daoud
> > Registrant Street1:Queen Noor Br 07
> > Registrant City:Amman
> > Registrant State/Province:Amman
> > Registrant Postal Code:9626
> > Registrant Country:JO ... (this is the country of Jordan)
> > Registrant Phone:+1.96265163864
> > Registrant Email:teraknowledgesystems@.yahoo.com
> > Yes our troll lives in the country of Jordan. You can purchase a
> > product from someone to use with your valuable Oracle database
> > where there is not only no company behind it there is just one
> > guy in a third-world country.
> You need to be more careful about this kind of language.
> One of Amjad Daoud signatures I found on the web is:
> Amjad Daoud
> The OMLET Team Lead
> Tera Knowledge Systems, Inc.
> Arlington, Texas <----- This is not Jordan
But check out eggy-weggies email, it's Jordanian. Not that that means
anything, I certainly am not in Tonga or oz as some of my web presence
may imply. But he says Sunnyvale, and Texas, and the registration is
on Queen Noor's... something. All-in-all, not a situation a rational
person would want to take advice or a product from. And wassup with
the 619 area code on the Sunnyvale provider, a cell?
> Besides: What do you have against "guys from third-world countries" ?
There are matters of security, economics, legal liability and trust
involved.
But besides that, rotten egg seems to have gone well into
net-psychoses, cyberstalking anyone who he perceives as attacking him,
which is pretty much anyone acknowledging his existence. Not a pretty
sight. DM has once again made a fast, correct call about a spammer.
As far as celko, I thought DM was suitably polite in explaining the
plug issue. It was a minor gaffe on celko's part, perfectly
understandable, DM didn't "break his balls." I would say DM should
have ignored it given the crossposting, but he clearly is making an
effort to be fair and non-partisan when netcopping. My bias is
towards letting people plug their work amid a useful answer, but DM
has almost convinced me his way is better.
jg
--
@.home.com is bogus.
Internet meltdown, news at 11:
http://catless.ncl.ac.uk/Risks/23.44.html#subj1|||jlanfield2003@.yahoo.com (Jeff Lanfield) wrote:
>As the original poster I would agree here. If it is solicited and
>relevant to the question asked it's not spam. I actually did end up
>buying Mr. Celko's book and can attest that it is quite useful for
>solving the class of problems I was interested in.
>While in general people should not post ads for books they wrote on
>tech newsgroups an exception should definitely be made for cases like
>this one where the book contains pretty much the exact answer.
Exactly.
I am glad you got the help you needed. That is a large part of
what technical newsgroups are for.
[snip]
Sincerely,
Gene Wirchenko
Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.|||"Mikito Harakiri" <mikharakiri@.iahu.com> wrote in message news:wIAGc.12$ZJ2.250@.news.oracle.com...
> "Marshall Spight" <mspight@.dnai.com> wrote in message
> news:8KNFc.20603$%_6.12831@.attbi_s01...
> > ...Trees and
> > Hierarchies in SQL doesn't have the same cachet as the latest
> > work by David Sedaris. In fact, it's Amazon's 16,754's most
> > popular item, right behind the $350 "Suunto Yachtsman Wristop
> > Computer Watch w/ Barometer and Compass...
> How did you query this? Does amazon provides SQL interface nowadays?
Methodology:
Click on the link provided by Mr. Celko.
Scroll down to the "product details" section.
Read the sales rank there (note that it's changed since then.)
--> "Amazon.com sales rank: 16,754"
subtract one :-)
--> 16,753
Go to www.google.com
search exactly as follows:
site:amazon.com "sales rank: 16753"
Click on the link
Often when a site doesn't provide some bit of
info directly, there's a way to get to it with
judicious use of google features.
For example, if you want to know if a given
32 bit number is prime, often times the fastest
way to decide is to google for it. If it's prime,
it'll be on some indexed page of prime numbers
somewhere.
Marshall|||
"Ed prochak" <ed.prochak@.magicinterface.com> wrote in message
news:4b5394b2.0407061035.3fac9628@.posting.google.com...
> "x" <x-false@.yahoo.com> wrote in message
news:<40ea4378@.post.usenet.com>...
> > "Daniel Morgan" <damorgan@.x.washington.edu> wrote in message
> > news:1089057907.319050@.yasure...
> > > Here's what we have found:
> > > I encourage others to do so too and and in case anyone wonders who
this
> > > spam-troll really is ...
> > > Domain Name:OMLET.ORG
> > > Created On:15-Jun-2004 10:54:38 UTC ... (been in business only 2
weeks)
> > > Sponsoring Registrar:R52-LROR
> > > Registrant Name:Amjad Daoud
> > > Registrant Street1:Queen Noor Br 07
> > > Registrant City:Amman
> > > Registrant State/Province:Amman
> > > Registrant Postal Code:9626
> > > Registrant Country:JO ... (this is the country of Jordan)
> > > Registrant Phone:+1.96265163864
> > > Registrant Email:teraknowledgesystems@.yahoo.com
> > > Yes our troll lives in the country of Jordan. You can purchase a
> > > product from someone to use with your valuable Oracle database
> > > where there is not only no company behind it there is just one
> > > guy in a third-world country.
> > You need to be more careful about this kind of language.
> > One of Amjad Daoud signatures I found on the web is:
> > Amjad Daoud
> > The OMLET Team Lead
> > Tera Knowledge Systems, Inc.
> > Arlington, Texas <----- This is not Jordan
> > Besides: What do you have against "guys from third-world countries" ?
> He doesn't.
Ok. My mistake. English is not my native tongue. :-)
>Daniel, I and others have just been dealing with this
> troll in the ORACLE groups for a while now.
I didn't know that.
>And you should note that
> you presented the company address, which may be only a sales office
> and not the location of the "OMLET Team". It certainly is different
> from the residence of this guy.
I don't know if there is a company or a sale office there.
But the address is from 2001 and looks like a company address from U.S.A.
If one does business with a U.S.A. company, I think the U.S.A. and
international laws apply.
It is not relevant that the "OMLET Team" is just one guy from the country of
Jordan, regardless if this country is a "third world country" or not.
> Calling Amjad a troll is certainly MUCH more polite than the crap he
> has spewed in the ORACLE groups.
Probably he has done that when provoked.
Calling Jordan a third world country has nothing to do with what Amjad has
done.
|||
"Joel Garry" <joel-garry@.home.com> wrote in message
news:91884734.0407061442.5a606c44@.posting.google.com...
> But check out eggy-weggies email, it's Jordanian. Not that that means
> anything, I certainly am not in Tonga or oz as some of my web presence
> may imply. But he says Sunnyvale, and Texas, and the registration is
> on Queen Noor's... something. All-in-all, not a situation a rational
> person would want to take advice or a product from. And wassup with
> the 619 area code on the Sunnyvale provider, a cell?
Exactly. It doesn't mean anything.
If you want to pay him, then you may ask for a company address and
warranties.
> > Besides: What do you have against "guys from third-world countries" ?
> There are matters of security, economics, legal liability and trust
> involved.
These matters are involved anyway.
> But besides that, rotten egg seems to have gone well into
> net-psychoses, cyberstalking anyone who he perceives as attacking him,
> which is pretty much anyone acknowledging his existence. Not a pretty
> sight. DM has once again made a fast, correct call about a spammer.
You could show him some compassion then.
> As far as celko, I thought DM was suitably polite in explaining the
> plug issue. It was a minor gaffe on celko's part, perfectly
> understandable, DM didn't "break his balls." I would say DM should
> have ignored it given the crossposting, but he clearly is making an
> effort to be fair and non-partisan when netcopping. My bias is
> towards letting people plug their work amid a useful answer, but DM
> has almost convinced me his way is better.
Every newsgroup has its trolls :-)
|||"Marshall Spight" <mspight@.dnai.com> wrote in message
news:8KIGc.34726$a24.6231@.attbi_s03...
> Methodology:
> Click on the link provided by Mr. Celko.
> Scroll down to the "product details" section.
> Read the sales rank there (note that it's changed since then.)
> --> "Amazon.com sales rank: 16,754"
> subtract one :-)
> --> 16,753
> Go to www.google.com
> search exactly as follows:
> site:amazon.com "sales rank: 16753"
> Click on the link
> Often when a site doesn't provide some bit of
> info directly, there's a way to get to it with
> judicious use of google features.
> For example, if you want to know if a given
> 32 bit number is prime, often times the fastest
> way to decide is to google for it. If it's prime,
> it'll be on some indexed page of prime numbers
> somewhere.
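The quoted recipe is just a bit of string manipulation, so it scripts easily. A minimal sketch in Python 3 follows; the function names and the search-URL construction are illustrative assumptions, while the rank of 16,754 and the "subtract one" step come from the post above.

from urllib.parse import urlencode

def sales_rank_query(observed_rank):
    # "subtract one" from the rank shown on the product page, then search
    # for that rank elsewhere on amazon.com via a site-restricted query
    target = observed_rank - 1
    return 'site:amazon.com "sales rank: %d"' % target

def google_search_url(query):
    # Build a search URL that could be pasted into a browser
    return "https://www.google.com/search?" + urlencode({"q": query})

query = sales_rank_query(16754)   # the rank quoted in the post: 16,754
print(query)                      # site:amazon.com "sales rank: 16753"
print(google_search_url(query))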
Very clever. Now, I wouldn't be surprised if you figure out how to do
aggregation and joins via google.|||"x" <x-false@.yahoo.com> wrote in message news:<40eba4a3$1@.post.usenet.com>...
> "Joel Garry" <joel-garry@.home.com> wrote in message
> news:91884734.0407061442.5a606c44@.posting.google.com...
> > But check out eggy-weggies email, it's Jordanian. Not that that means
> > anything, I certainly am not in Tonga or oz as some of my web presence
> > may imply. But he says Sunnyvale, and Texas, and the registration is
> > on Queen Noor's... something. All-in-all, not a situation a rational
> > person would want to take advice or a product from. And wassup with
> > the 619 area code on the Sunnyvale provider, a cell?
> Exactly. It doesn't mean anything.
> If you want to pay him, then you may ask for a company address and
> warranties.
What I want is for him to stop spamming this group. How might that
happen?
> > > Besides: What do you have against "guys from third-world countries" ?
> > There are matters of security, economics, legal liability and trust
> > involved.
> These matters are involved anyway.
They are more involved when third-world countries are mixed in. I
deal with many issues daily, living near one. There is a huge
cultural issue, too, especially when it comes to dealing with
government regulations and legalities.
> > But besides that, rotten egg seems to have gone well into
> > net-psychoses, cyberstalking anyone who he perceives as attacking him,
> > which is pretty much anyone acknowledging his existence. Not a pretty
> > sight. DM has once again made a fast, correct call about a spammer.
> You could show him some compassion then.
My wife's job is to show compassion. But she doesn't post on usenet.
In any unmoderated usenet group, there needs to be feedback and some
consensus about what is appropriate for the group. This group has
long taken a fairly strict no-spam stance, including creating an
appropriate group for marketing. The reason for this stance comes
from the experience of longtime posters who have seen groups get wiped
out from not taking such a strict stance. Another aspect of this
group is that it takes a critical view of postings - this is a good
thing, dba work must be precise and correct or it is bad dba work.
I think more compassion could be shown to newbies, directing them towards
proper research and docs, as opposed to harsh criticism, but that's
just me, and I try to convince others of that by example rather than
argument. Unfortunately, it doesn't seem to work as well as DM's way
with newbie spammers. Net.kooks do not need compassion; in fact, that
may be just the wrong thing to do, since any iota of possibility that
what they are doing isn't wrong will be rationalized into
justification.
> > As far as celko, I thought DM was suitably polite in explaining the
> > plug issue. It was a minor gaffe on celko's part, perfectly
> > understandable, DM didn't "break his balls." I would say DM should
> > have ignored it given the crossposting, but he clearly is making an
> > effort to be fair and non-partisan when netcopping. My bias is
> > towards letting people plug their work amid a useful answer, but DM
> > has almost convinced me his way is better.
> Every newsgroup has his trolls :-)
I don't think DM has any trolls, eggy-weggy, or x-false, or whomever
you might be today.
jg
--
@.home.com is bogus.
"Eggy-wegs! I'd like to crush 'em!" - Clockwork Orange