Showing posts with label varchar. Show all posts

Thursday, March 29, 2012

Concatenate int & varchar - SQL

Hi,

I am trying to write some simple SQL to concatenate two fields within a table; the primary key is an int and the other field is a varchar.

But I am receiving the error:

Conversion failed when converting the varchar value ',' to data type int.

The SQL I am trying to use is:

select game_no + ',' + team_name as match
from result

Thanks

As the error message says, you can only concatenate values of compatible data types; use the CONVERT or CAST functions to make them compatible.

select convert(varchar, game_no) + ',' + team_name as match

from result

|||

Yep. Why the parser is so stupid that it assumes the presence of one number means all other expressions must evaluate to a number, rather than the other way around, is a mystery to me.
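For completeness, the same fix written with CAST instead of CONVERT; a minimal sketch against the result table from the question (game_no int, team_name varchar):

```
-- Convert the int key to varchar first, so + means string
-- concatenation instead of an attempted arithmetic add.
select cast(game_no as varchar(10)) + ',' + team_name as match
from result
```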

Concatenate All values to one select

I'm trying to get all the values in one field
i.e from Northwind
Select ShipName from orders
where ShippedDate = Cast(GetDate() as Varchar(11))
I need the output to be one value, separated with char(13) + char(10):
Vins et alcools Chevalier
Toms Spezialitäten
Hanari Carnes
Victuailles en stock
Suprêmes délices
Hanari Carnes
Chop-suey Chinese
Richter Supermarkt
Wellington Importadora
HILARION-Abastos
Ernst Handel
etc ...
Thanks
Stephen K. Miyasato

See if this helps.
http://milambda.blogspot.com/2005/0...s-as-array.html
AMB
"Stephen K. Miyasato" wrote:

> [snip]
|||

You can do something like this:
declare @res nvarchar(4000)
set @res = ''
select @res = @res + ShipName + char(13) + char(10)
from Orders where ShippedDate is null
print @res
Stephen K. Miyasato wrote:
> [snip]

|||

Stephen K. Miyasato (miyasat@flex.com) writes:
> [snip]
Which version of SQL Server?
If you are on SQL 2000, you need to run a cursor. (The solutions posted
in the thread rely on undefined behaviour, and could give undesired
results.)
On SQL 2005 you can do as in this example:
select CustomerID,
substring(OrdIdList, 1, datalength(OrdIdList)/2 - 1)
-- strip the last ',' from the list
from
Customers c cross apply
(select convert(nvarchar(30), OrderID) + ',' as [text()]
from Orders o
where o.CustomerID = c.CustomerID
order by o.OrderID
for xml path('')) as Dummy(OrdIdList)
go
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

|||

I tried this on a field with datatype ntext but could not get it to work.
I also tried cast(memo, VarChar(8000)) which failed to give the right
result.
Any ideas on how to convert ntext to make the solution below to work?
Thanks
Stephen K. Miyasato
"Sericinus hunter" <serhunt@.flash.net> wrote in message
news:e%Q0g.10712$%m4.3143@.newssvr33.news.prodigy.com...
> [snip]

|||

Stephen K. Miyasato (miyasat@flex.com) writes:
> I tried this on a field with datatype Ntext but could not get it to work.
> I also tried cast(memo, VarChar(8000)) which failed to give the right
> result.
> Any ideas on how to convert ntext to make the solution below to work?
So the column you really have is ntext? And the values are typically > 4000
characters?
This is not going to be fun at all. I think my two prime suggestions
would be:
1) Do it client-side
2) Upgrade to SQL 2005 where you can use the new nvarchar(MAX) data type,
which does not have all the restrictions of the ntext data type.
The problem on SQL 2000 is that you cannot work with ntext variables,
so you would have to do this in a table column. And to make it even more
fun, you would have to use the UPDATETEXT command. You can read about it
in Books Online, but you will have to play around a bit to understand
how it works.
Why don't you describe the actual business problem? Maybe there is a
much better solution?
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

|||

The fields are less than varchar(4000), and I can use a cast function into a
#temp table
cast(memo as varchar(4000)) AS memo, DateEntered
How does one iterate through the table looking for a var
DECLARE @iFoundAt int
set @iFoundAt = (Select CHARINDEX('Apr 18 2006', memo) from Flags where
PatNo = 3760)
if @iFoundAt > 0
BEGIN
-- iterate through table and add records where 'Apr 18 2006' only, found
END
Business problem. I have a list of Clinical reminders with each record
being a reminder.
Description memo DateEntered
Stool 4/18/2006 - Done 4/18/2006
Ha1c 4/16/2006 - 6.5 4/16/2006
I want to query the flags table and insert the text into a word processing
document on the client side.
Thanks
Stephen K. Miyasato
"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns97A9D8AC3D2DEYazorman@.127.0.0.1...
> [snip]

|||

Thanks all,
I figured it out
Stephen K. Miyasato
"Stephen K. Miyasato" <miyasat@.flex.com> wrote in message
news:OGg6cc5YGHA.1180@.TK2MSFTNGP03.phx.gbl...
> [snip]

concatenate a text data type

Hi,
How to concatenate a text data type value with another text data type
value or varchar data type value.
Regards
Krishna

Krishna (krishna_hot@hotmail.com) writes:
> How to concatenate a text data type value with another text data type
> value or varchar data type value.
You will have to look into UPDATETEXT.
Note that you cannot assign variables of the type text at all.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
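Erland's pointer can be fleshed out roughly as follows. This is only a sketch; the table and column names are made up, and on SQL 2000 the value to append has to pass through a varchar variable, since text variables cannot be declared:

```
-- Hypothetical table: append the 'extra' text onto 'body'.
create table notes (id int primary key, body text, extra text)

declare @ptr varbinary(16), @suffix varchar(8000)

-- Pull the value to append through varchar (values longer than
-- 8000 characters would need to be appended in chunks).
select @suffix = cast(extra as varchar(8000)) from notes where id = 1

-- Get a pointer to the target text value and append in place:
-- NULL insert offset = append at the end, 0 = delete nothing.
select @ptr = textptr(body) from notes where id = 1
updatetext notes.body @ptr null 0 @suffix
```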

concatenate a string within a loop from a temp table

I need help.

I have a large table that looks like this.

(ID INT NOT NULL IDENTITY(1,1),PK INT , pocket VARCHAR(10))

1, 1, p1
2, 1, p2
3, 2, p3
4, 2, p4
5, 3, p5
6, 3, p6
7, 4, p7
8, 5, p1
9, 5, p2
10,5, p83

I would like to loop through the table and concatenate the pocket field for all the records that have the same PK, and insert the PK and the concatenated string into another table in a timely manner.

can anyone help?

Emad

An easy way (but slow if you have a big table) to do this is to use a cursor. Below is a fully functional example.

Hope it helps!

DECLARE @MyTable TABLE (ID INT NOT NULL, PK INT, pocket VARCHAR(10))

INSERT INTO @MyTable(ID, PK, pocket) VALUES (1, 1, 'p1')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (2, 1, 'p2')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (3, 2, 'p3')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (4, 2, 'p4')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (5, 3, 'p5')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (6, 3, 'p6')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (7, 4, 'p7')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (8, 5, 'p1')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (9, 5, 'p2')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (10, 5, 'p83')

DECLARE @MyResult TABLE (PK Int, SumOfPockets Varchar(4000))
DECLARE @PK Int
DECLARE @MyString Varchar(4000)

DECLARE cMyTable CURSOR LOCAL FAST_FORWARD FOR
SELECT PK FROM @MyTable GROUP BY PK

OPEN cMyTable
FETCH NEXT FROM cMyTable INTO @PK

WHILE @@FETCH_STATUS = 0
BEGIN
SET @MyString = ''
SELECT @MyString = @MyString + pocket FROM @MyTable WHERE PK = @PK
INSERT INTO @MyResult(PK, SumOfPockets) VALUES (@PK, @MyString)
FETCH NEXT FROM cMyTable INTO @PK
END

CLOSE cMyTable
DEALLOCATE cMyTable

SELECT * FROM @MyResult

|||

Doru,

This looks like a great solution, although I have a very large table and I ran out of resources when I tried to implement it. I was hoping I could do it with some loops and temp tables. Do you have any other ideas?

Emad

|||

I can't use local variables because the result of the concatenation would be more than 8000 characters.

|||

Emadkb wrote:

i CAN'T USE LOCAL VARIABLES BECAUSE THE RESULT OF THE CONCATENATION WOULD BE MORE THAN 8000 CHAR.

If you're running on SQL Server 2005, you can use VARCHAR(MAX) and NVARCHAR(MAX) variables in your string concatenation.
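In that case the variable-concatenation pattern from earlier in the thread carries over almost unchanged; a sketch, assuming a permanent MyTable with the same PK/pocket columns, and with the usual caveat that the order of concatenation is not guaranteed:

```
declare @s varchar(max)
set @s = ''
-- Same aggregate-concatenation trick, but the variable can now
-- grow past 8000 characters.
select @s = @s + pocket from MyTable where PK = 5
select @s
```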

|||

We are still on SQL 2000.

|||

Hi Emad,

If Varchar(8000) is not big enough then you'll have to use Text datatype in your destination table. This will slow down the whole thing even more :-(.

A solution that should work, but which is very slow, is:


DECLARE @MyTable TABLE (ID INT NOT NULL, PK INT, pocket VARCHAR(10))

INSERT INTO @MyTable(ID, PK, pocket) VALUES (1, 1, 'p1')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (2, 1, 'p2')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (3, 2, 'p3')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (4, 2, 'p4')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (5, 3, 'p5')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (6, 3, 'p6')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (7, 4, 'p7')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (8, 5, 'p1')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (9, 5, 'p2')
INSERT INTO @MyTable(ID, PK, pocket) VALUES (10, 5, 'p83')

CREATE TABLE #MyResult (PK Int, SumOfPockets Text)
DECLARE @PK Int, @pocket Varchar(10), @prev_PK Int
DECLARE @ptrColText Varbinary(16)

DECLARE cMyTable CURSOR LOCAL FAST_FORWARD FOR
SELECT PK, pocket FROM @MyTable

OPEN cMyTable
FETCH NEXT FROM cMyTable INTO @PK, @pocket

WHILE @@FETCH_STATUS = 0
BEGIN
IF @PK = @prev_PK
UPDATETEXT #MyResult.SumOfPockets @ptrColText NULL 0 @pocket
ELSE
BEGIN
INSERT INTO #MyResult(PK, SumOfPockets)
VALUES (@PK, @pocket)

SELECT @ptrColText = TEXTPTR(SumOfPockets)
FROM #MyResult WHERE PK = @PK

SET @prev_PK = @PK
END

FETCH NEXT FROM cMyTable INTO @PK, @pocket
END

CLOSE cMyTable
DEALLOCATE cMyTable

SELECT * FROM #MyResult


|||

This looks exactly like what I want. Excellent work. I will try it and let you know how it goes.

Emad

Tuesday, March 27, 2012

Concat and convert datetime

Hi,
I have two columns named indate and intime. Both of these fields are of
varchar type. I need to concatenate the two in such a way that the result
looks like a datetime value. Or if there is a way to convert these varchar
values into the datetime datatype, that would work too.
Thanks.
Imran

Can you show us the table structure and some sample data?
http://www.aspfaq.com/5006
--
http://www.aspfaq.com/
(Reverse address to reply.)
"Imran Prasla" <ImranPrasla@.discussions.microsoft.com> wrote in message
news:A67474DE-36E9-48DA-A07C-A583F159E7DB@.microsoft.com...
> [snip]

|||

Imran
CREATE TABLE #Test
(
col1 VARCHAR(10),
col2 VARCHAR(10)
)
GO
INSERT INTO #Test VALUES ('20040101','15:00')
INSERT INTO #Test VALUES ('20040102','22:00')
GO
SELECT CONVERT(DATETIME,col1,120)+CONVERT(DATETIME,col2,120)
FROM #Test
"Imran Prasla" <ImranPrasla@.discussions.microsoft.com> wrote in message
news:A67474DE-36E9-48DA-A07C-A583F159E7DB@.microsoft.com...
> [snip]

|||

CREATE TABLE #Test
(
col1 VARCHAR(10),
col2 VARCHAR(10)
)
GO
INSERT INTO #Test VALUES ('20040101','15:00')
INSERT INTO #Test VALUES ('20040102','22:00')
GO
SELECT CONVERT(DATETIME,col1,120)+CONVERT(DATETIME,col2,120)
FROM #Test
SELECT CONVERT(DATETIME,col1 + ' ' + col2)
FROM #Test
DROP TABLE #Test
Is that what you're wanting?
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:eWRjUGrtEHA.1336@.tk2msftngp13.phx.gbl...
> [snip]

concat all col2 values for each col1, and add sum(col3) (was "query help")

Hi,
Can anybody help me to create a single query? I have this problem.

CREATE TABLE t1 (
col1 VARCHAR(100)
, col2 VARCHAR(100)
, col3 INT)

INSERT INTO t1 VALUES('A001','Tom',30)
INSERT INTO t1 VALUES('A001','Rick',40)
INSERT INTO t1 VALUES('A001','Harry',10)

INSERT INTO t1 VALUES('A002','Peter',50)
INSERT INTO t1 VALUES('A002','Sam',50)

INSERT INTO t1 VALUES('A003','Fred',50)

I want a resultset like this ...
i.e. col1, col2 (all the values represented in a single row for each col1) and sum(col3)

(Note: there can be a maximum of three records for each col1 value, i.e. for A001 there can be at most three records.)

A001 Tom Rick Harry 80 --sum(col3)
A002 Peter Sam NULL 100
A003 Fred NULL NULL 50

Any help would be greatly appreciated!!

|||

> (Note: There can be maximum three records for each col1 record, i.e for A001 there can be maximum three records)

Based on this, the below works. I think it is about as efficient as it can be, though verbose for the sake of transparency :)

SET NOCOUNT ON
CREATE TABLE t1 (
col1 VARCHAR(100)
, col2 VARCHAR(100)
, col3 INT)

INSERT INTO t1 VALUES('A001','Tom',30)
INSERT INTO t1 VALUES('A001','Rick',40)
INSERT INTO t1 VALUES('A001','Harry',10)
INSERT INTO t1 VALUES('A002','Peter',50)
INSERT INTO t1 VALUES('A002','Sam',50)
INSERT INTO t1 VALUES('A003','Fred',50)

SELECT Col1,
Col2a,
Col2b,
Col2c,
SUM(Col3) AS TheTotal
FROM --Pivot data
(SELECT TOP 100 PERCENT
Col1,
(SELECT TOP 1 Col2
FROM dbo.t1 AS B
WHERE A.Col1 = B.Col1
ORDER BY
B.Col2) AS Col2a,
(SELECT TOP 1 Col2
FROM dbo.t1 AS B
WHERE A.Col1 = B.Col1
AND B.Col2 NOT IN (SELECT TOP 1 Col2
FROM dbo.t1 AS C
WHERE C.Col1 = A.Col1
ORDER BY
C.Col2)
ORDER BY
B.Col2) AS Col2b,
(SELECT TOP 1 Col2
FROM dbo.t1 AS B
WHERE A.Col1 = B.Col1
AND B.Col2 NOT IN (SELECT TOP 2 Col2
FROM dbo.t1 AS C
WHERE C.Col1 = A.Col1
ORDER BY
C.Col2)) AS Col2c,
Col3
FROM dbo.t1 AS A
ORDER BY
Col1,
Col2) AS DerT
GROUP BY
Col1,
Col2a,
Col2b,
Col2c

DROP TABLE t1

SET NOCOUNT OFF
HTH

|||

> Based on this the below works. I think it is about as efficient as it can be though verbose for the sake of transparency :)

Awesome, as usual... thanks a ton, Pootie!! :rolleyes:

|||

Never mind, you have a better solution above.

|||

As a possible alternative, blindman's neat function here could be adapted, resulting in a much simpler query:

SELECT col1, dbo.Concat_ICD(col1) as TheNames, Sum(col3) as TheTotal
FROM t1
GROUP BY col1

http://www.dbforums.com/showthread.php?t=1605725

This would not produce the visible NULLs in the result, but I was presuming that wasn't a requirement.

|||

The function is your best solution, because it works for any number of records.
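The linked function itself is not reproduced in the thread, but a group-concatenating scalar UDF generally has this shape. A sketch only: the name Concat_ICD and the t1 columns come from the query above, the body is an assumption, and the concatenation order is unspecified:

```
-- Hypothetical reconstruction: return all col2 values for a
-- given col1 as one space-separated string.
create function dbo.Concat_ICD (@col1 varchar(100))
returns varchar(4000)
as
begin
    declare @list varchar(4000)
    select @list = coalesce(@list + ' ', '') + col2
    from t1
    where col1 = @col1
    return @list
end
```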

By the way, I wish I could take credit for that function, but it is actually one of the many things I have learned from participating in this forum over the past few years.

|||

Actually that was the solution I hoped to use - it allows n values to be concatenated. However, I read the requirement as the return putting the names into three columns rather than one. If this isn't a requirement then definitely go with Blindman's solution.

|||

I wanted it in three different columns, so I used Pootie's one.
Anyway, thanks everybody for the valuable info.

|||

For comparison purposes, here is the equivalent query in MySQL:

select col1
, group_concat(col2)
, sum(col3)
from daTable
group
by col1

:)

|||

I'm calling you on that one Rudy. Did you read the requirements carefully?

|||

...but here is a shorter method of coding it for SQL Server:

SET NOCOUNT ON
CREATE TABLE #t1
(col1 VARCHAR(100),
col2 VARCHAR(100),
col3 INT)

INSERT INTO #t1 VALUES('A001','Tom',30)
INSERT INTO #t1 VALUES('A001','Rick',40)
INSERT INTO #t1 VALUES('A001','Harry',10)
INSERT INTO #t1 VALUES('A002','Peter',50)
INSERT INTO #t1 VALUES('A002','Sam',50)
INSERT INTO #t1 VALUES('A003','Fred',50)

select A.col1,
min(A.col2) as name1,
min(B.col2) as name2,
min(C.col2) as name3,
max(coalesce(A.col3, 0) + coalesce(B.col3, 0) + coalesce(C.col3, 0)) as col3total
from #t1 A
left outer join #t1 B on A.col1 = B.col1 and A.col2 < B.col2
left outer join #t1 C on B.col1 = C.col1 and B.col2 < C.col2
group by A.col1

drop table #t1

|||

> I'm calling you on that one Rudy. Did you read the requirements carefully?

oh, SHEEEEEEEESH, okay :S
select col1
, group_concat(col2 separator ' ')
, sum(col3)
from t1
group
by col1

|||

> for comparison purposes, here is the equivalent query in mysql --
> select col1
> , group_concat(col2)
> , sum(col3)
> from daTable
> group
> by col1 :)

...and that would be just great if it was a real ANSI compliant databa...

Oh, never mind

|||

pot? meet kettle

kettle? meet pot

:p

|||

That was great, thank you Blindman. And thank you all for your help.

Sunday, March 25, 2012

Computed Columns - Is Persisted

Does anyone know why I am unable to make this a persisted column?
create table test (
a varchar(500),
b as (case(left([a], 4)) when 'Test' then 1 else 0 end)
)
After creating it I go into SSMS and am unable to change the 'Is Persisted' property
from No to Yes.

"paul" <paul@discussions.microsoft.com> wrote in message
news:23A62435-6274-4341-A100-0210B0520A7F@.microsoft.com...
> [snip]
create table test
(
a varchar(500),
b as (case(left([a], 4)) when 'Test' then 1 else 0 end) persisted
)
David

|||

Thanks David. When I ran this it didn't work. Then I realized the database
I was running it on was the SQL 2000 version. Once I tried it on SQL 2005,
it worked like a charm.
"David Browne" wrote:

> "paul" <paul@.discussions.microsoft.com> wrote in message
> news:23A62435-6274-4341-A100-0210B0520A7F@.microsoft.com...
> create table test
> (
> a varchar(500),
> b as (case(left([a], 4)) when 'Test' then 1 else 0 end) persisted
> )
> David
>
>

Thursday, March 22, 2012

Computed Column calculations

If I want to split a computed column into two or more columns based on
the length (it's a varchar) of the computed column, how often will
SQL Server determine what the computed column is? Roughly along the
lines of:

select comp_col,'comp_col 2'=
case when datalength(comp_col)<=100 then comp_col
else left(comp_col,99) + '~' end,
datalength(comp_col)
from aTable

As you can see, in this scenario we use the computed column,
comp_col, in a few places, so does SQL Server need to calculate it
each time? I'm working on the basis that it does, and thus
trying to shift the computed column out to a variable and then
manipulate and return from there, but that has its own problems when
you throw in additional parameters (trying to join table UDFs), so if
SQL Server is smart enough not to calculate the column each time I
would save a lot of hassle.

Cheers, Dave

DMAC (drmcl@drmcl.free-online.co.uk) writes:
> [snip]

Unless you are calling a scalar UDF in the expression for the computed
column, I would not be too worried. While there probably is some overhead,
it pales in comparison with time for disk access etc.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

Adding storage for a computed column requires negligible resources,
especially wrt disk access. However, SQL Server imposes restrictions
on how computed columns can be used. Thus, I don't see the value of
using computed columns - you are accepting denormalization by creating
the computed column in the first place, why not take it one efficient
step further?

|||

(maxl@msn.com) writes:
> [snip]

You can have it both ways: you can materialize your computed column
by adding an index on it.
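A minimal sketch of that, with a made-up table; the computed expression has to be deterministic and precise for the index to be allowed:

```
create table order_line (
    qty int not null,
    unit_price money not null,
    -- deterministic, precise expression, so it is indexable
    line_total as (qty * unit_price)
)

-- The index materializes the computed value on disk.
create index ix_order_line_total on order_line (line_total)
```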

If you stay with plain vanilla columns, and have a column of which the
value is derived from other columns, you will have to compute that
value in a trigger. This can have some overhead when inserting data.
(More overhead than an indexed computed column has.)

Myself, I have used computed columns only very occasionally. Once
it was a fairly complex expression for an important test. In another case
it was a simple formula, but the value in question is essential and
queried in several places. And before I reworked this data, there was
such a column - non-computed - in another table.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

Unfortunately, in this instance I do use UDFs in the computed column.
Actually I use one which calls three more, the third of which uses a
cursor to run a dozen logical ifs on some data to see whether the
data should be in front of or after another value, or be prefixed or
suffixed by something else, which makes it dead slow anyway. The problem
is obviously the business logic that says a bunch of rules needs to be
applied to generate an item's description. This description can have from
1 to 80 values which need all the rules applied to each one, hence the
cursor, hence the UDF, hence the headache. Now I need to substitute some
replacements for words inside this finished description, which come from,
yup, another cursor to rattle through a few thousand 'potential'
replacements to check for a match. Eventually I want to know if the
computed description, after the replacements have been applied, is
bigger than 100 chars, and get a list. So far I've narrowed it down to 20
hours for 125,000 line items. But the potential replacements are growing
rapidly and hurting, so I'm not too happy a bunny right now.

Dave

*** Sent via Developersdex http://www.developersdex.com ***
Don't just participate in USENET...get rewarded for it!

|||

anonymous@devdex.com (anonymous@devdex.com) writes:
> [snip]

Now, does that sound ugly or what?

In this case, I would consider materializing the column. It doesn't sound
like this would be possible by indexing the column, so you would have a
plain normal column, and the update from a trigger.

Of course, that would incur an overhead when inserting rows into the
table, and it assumes that the components that make up this ugly beast
are somewhat static.

In the short term, this may not be possible. But it may be a good idea
to get all the values that you need to work with into a temp table,
so that the column is at least computed not more than once for each row.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

In other words, store the values.

|||

Obviously the values are stored, but they are used to create the
description, so the description itself is never stored, as there is no
point storing it when it can change all the time, either as it grows or
as it is amended.
Think of it this way, people are made up of characteristics which have
values eg
hair colour - grey, blonde, brunette,
eye colour - green, blue
country of origin - british,american,
sex - male,female etc
thus, the description of one person is 'female, blonde, green eyes,
british born'. The description includes the stored values, but is itself
derived. That way, as the subject gains more values for the
characteristics, or is edited, or the rules change (i.e. we applied a 'born'
label to the country characteristic to make the value read 'british born',
which may change to 'born in britain', i.e. the value is now prefixed instead
of being suffixed by new text), the description is never stored but
always computed, because you do not want to store every label with every
value, just store the label once and apply the current one to the
description as it is derived - obvious really. And now somebody wants
'born' to be 'brn', or 'british' to be 'gb', etc., which is applied after the
description is computed, but the list of abbreviations is prohibitive
and NOT related to anything, just a big list of word replacements.
*** Sent via Developersdex http://www.developersdex.com ***
Don't just participate in USENET...get rewarded for it!

|||

anonymous@devdex.com (anonymous@devdex.com) writes:
> And now somebody wants 'born' to be 'brn' or british to be gb etc which
> is applied after the description is computed, but the list of
> abbreviations is prohibitive and NOT related to anything, just a big
> list of word replacements.

Could you stash those abbreviate into a table? And then the user-defined
functions that builds the Description string, could take an argument
on whether to look up that table, and then you would make the abbrivated
string a second computed column?

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

Thanks all for your contributions. I do indeed have the abbreviated
description as a computed column based on the results of the
replacements to the computed descriptions, after having looped through the
abbreviations table looking for matches. As this is the bit that takes
the most amount of time, this is where I can save the most, which I
propose to do by simply calculating the abbreviated description once and
using the stored value for as many retrievals as I can get away with.
That way all I need to track ('all' he says in an almost casual way) are
any changes that might result in changes to the computed column and then
re-calculate only those, which I can schedule as a job. The only real
time access to users for the abbreviated description (other than
exports and prints) is via a button so this is always calculated in real
time and so will always be accurate (about 1.5 secs per click) and if
the user only reads the descriptions I have nothing to re-calculate. The
only pain with this method are those changes to the abbreviations table
itself, given that we do not know which descriptions are affected by new
or updated abbreviations we need to re-calculate them all which is the
big pain (handled by a simple trigger from the abbreviations table to
set a flag for an overnight run to pick up on) The core problem is
obviously the lack of a relationship between the abbreviations and the
various pieces of the computed descriptions to which they apply.
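
The overnight run described could be sketched roughly like this. The table and column names (Abbreviations, Items, NeedsRecalc) are invented for illustration, since the actual schema isn't shown:

```sql
-- Invented schema: Abbreviations(Word, Abbrev), Items(Description, AbbrevDescription, NeedsRecalc).
-- Recompute only the rows the trigger flagged, then clear the flag.
DECLARE @Word varchar(50), @Abbrev varchar(50)

UPDATE Items SET AbbrevDescription = Description WHERE NeedsRecalc = 1

DECLARE abbrev_cur CURSOR FOR SELECT Word, Abbrev FROM Abbreviations
OPEN abbrev_cur
FETCH NEXT FROM abbrev_cur INTO @Word, @Abbrev
WHILE @@FETCH_STATUS = 0
BEGIN
    -- NB: REPLACE matches substrings, not whole words; pad with delimiters if that matters
    UPDATE Items
    SET AbbrevDescription = REPLACE(AbbrevDescription, @Word, @Abbrev)
    WHERE NeedsRecalc = 1

    FETCH NEXT FROM abbrev_cur INTO @Word, @Abbrev
END
CLOSE abbrev_cur
DEALLOCATE abbrev_cur

UPDATE Items SET NeedsRecalc = 0
```

The substring-matching caveat in the comment applies to the real job too, which is part of why the big list of word replacements is so awkward.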


Monday, March 19, 2012

composite index

Hi

I have a table with a three-column composite clustered index in SQL 2005

table1 ( a uniqueidentifier, b uniqueidentifier, c int, d varchar(1000))

The composite index is on (a, b, c).

when I write a query

select * from table1 where a = 'asd' and b = 'afasddf'

the query analyzer uses cluster index seek

but if I use the table in a join

select * from table1 inner join table2 on table1.a = table2.a

and table1.b = 'sdsfds'

the query optimizer uses a clustered index scan instead. Can't I limit the query on table1.b in a join?

thanks

Pauli

Creating an index on t1.b should do it.

e.g.

create table #t1(i uniqueidentifier, j uniqueidentifier, k int, primary key(i,j,k))
create table #t2(i uniqueidentifier, j uniqueidentifier, k int, primary key(i))
create index ix_ on #t1(j)
go
set showplan_text on
go

declare @j uniqueidentifier
set @j=newid()

select * from #t1 t1 join #t2 t2 on t1.i=t2.i and t1.j = @j
go
set showplan_text off
go
drop table #t2,#t1


paulixml wrote:

the query optimizer is using a cluster index scan on b, I can't limit the query on table1.b in a join?

Apparently in your particular case the optimizer has concluded that doing the join and then filtering the output is more expensive than first performing a table scan and filtering the intermediate output and then doing a join on a much smaller (hopefully) set.

It all really depends on the existing indexes in BOTH your tables, table1 and table2 (and not just on the availability of the index on [a,b,c] in table1), AND, of course, on the actual data - hence, on the statistics available to the optimizer. One could easily cook up various column value distributions for your existing tables/indexes where the very same query would produce very different execution plans.
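
One diagnostic trick (not for production code) is to run the query twice against the #t1/#t2 setup from the earlier reply, once letting the optimizer choose and once forcing the nonclustered index with a hint, and compare the plan costs:

```sql
declare @j uniqueidentifier
set @j = newid()

-- optimizer's choice
select * from #t1 t1 join #t2 t2 on t1.i = t2.i and t1.j = @j

-- force a seek through ix_ (the nonclustered index on j) and compare the plans
select * from #t1 t1 with (index(ix_)) join #t2 t2 on t1.i = t2.i and t1.j = @j
```

If the hinted plan comes out cheaper, stale statistics are a likely suspect; updating them is usually a better fix than shipping the hint.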

Sunday, March 11, 2012

complicated select

through no fault of my own, there exists a table with varchar(50)
column, the values of which are comma separated integers. something
like so:
create table foo (
fookey int primary key not null,
foointegerlist1 varchar(50) null,
foointegerlist2 varchar(50) null,
fooprice money null,
foosize int null
)
insert into foo values (1, ',2,3', '1', 10, 100)
insert into foo values (2, '3', '4', null, 100)
insert into foo values (3, ',1,11', ',1,5,6', 10, 100)
insert into foo values (4, ',3,5', ',11', 10, null)
(yes, the leading comma will randomly appear)
unpleasant? yes. scheduled to be redesigned? yes. but for now, this is
what i have to contend with.
the problem i'm having is that there is another table with a very
similar column, like so:
create table foomatch (
foomatchkey int primary key not null,
foomatchintegerlist1 varchar(50) null,
foomatchintegerlist2 varchar(50) null,
foomatchprice money null,
foomatchsize int null
)
insert into foomatch values (1, ',2', ',4,5,6', 10, 100)
insert into foomatch values (2, ',1,3,6', ',1', null, null)
the goal of the select statement is to capture all of the rows in the
table foo that "match" the values of a given single row in the foomatch
table. the rules for matching are as follows:
foomatch.foomatchprice = foo.fooprice AND
foomatch.foomatchsize = foo.foosize AND
(this is where it gets dicey for me)
at least ONE of the integer values in foomatch.foomatchintegerlist1
must occur in the foo.foointegerlist1 column AND
at least ONE of the integer values in foomatch.foomatchintegerlist2
must occur in the foo.foointegerlist1 column
where foomatch.foomatchkey = 1 (for example)
i have a user defined function that can turn a comma separated string
value into a single column table. i mention this in case building
tables of the values will make this easier / possible (perhaps with
relational math?)
thanks in advance for any help, and just let me know if you need
clarification,
jason

hi jason,
its quite a long story
use string manipulations to meet the desired solution.
we have several string functions that you can use
thanks,
Jose de Jesus Jr. Mcp,Mcdba
Data Architect
Sykes Asia (Manila philippines)
MCP #2324787
"jason" wrote:

> through no fault of my own, there exists a table with varchar(50)
> column, the values of which are comma separated integers. something
> like so:
> create table foo (
> fookey int primary key not null,
> foointegerlist1 varchar(50) null,
> foointegerlist2 varchar(50) null,
> fooprice money null,
> foosize int null
> )
> insert into foo values (1, ',2,3', '1', 10, 100)
> insert into foo values (2, '3', '4', null, 100)
> insert into foo values (3, ',1,11', ',1,5,6', 10, 100)
> insert into foo values (4, ',3,5', ',11', 10, null)
> (yes, the leading comma will randomly appear)
> unpleasant? yes. scheduled to be redesigned? yes. but for now, this is
> what i have to contend with.
> the problem i'm having is that there is another table with a very
> similar column, like so:
> create table foomatch (
> foomatchkey int primary key not null,
> foomatchintegerlist1 varchar(50) null
> foomatchintegerlist2 varchar(50) null,
> foomatchprice money null,
> foomatchsize int null
> )
> insert into foomatch values (1, ',2', ',4,5,6', 10, 100)
> insert into foomatch values (2, ',1,3,6', ',1', null, null)
> the goal of the select statement is to capture all of the rows in the
> table foo that "match" the values of a given single row in the foomatch
> table. the rules for matching are as follows:
> foomatch.foomatchprice = foo.fooprice AND
> foomatch.foomatchsize = foo.foosize AND
> (this is where it gets dicey for me)
> at least ONE of the integer values in foomatch.foomatchintegerlist1
> must occur in the foo.foointegerlist1 column AND
> at least ONE of the integer values in foomatch.foomatchintegerlist2
> must occur in the foo.foointegerlist1 column
> where foomatch.foomatchkey = 1 (for example)
> i have a user defined function that can turn a comma separated string
> value into a single column table. i mention this in case building
> tables of the values will make this easier / possible (perhaps with
> relational math?)
> thanks in advance for any help, and just let me know if you need
> clarification,
> jason
>

thanks for the reply Jose.
i spent some time looking at string manipulation approaches, i'm not
sure string manipulation will do the trick here. even if we boiled it
down to a single comparison, how would you use string manipulation to
answer the question are any of the comma separated elements in ',1,2,3'
also in the comma separated list ',3,4,5'? i've spent some time going
down this path, and it didn't yield anything close to sane results.
though if you can post something a little more specific, i'd be happy
to entertain the notion.
and yes, my explanations are rarely brief. this isn't even close to one
of my longer ones :)

If you use tables or table-valued user defined functions for the integer
lists it is reasonably straightforward:
Assuming there are tables/udfs foolist1 (fookey, foointeger1) ,
foomatchlist1 (foomatchkey, foomatchinteger1) and foomatchlist2
(foomatchkey, foomatchinteger2)
SELECT f.<column list>
FROM foo f
INNER JOIN foomatch fm
ON fm.foomatchprice = f.fooprice
AND fm.foomatchsize = f.foosize
WHERE EXISTS
(
SELECT NULL FROM foolist1 fl1
INNER JOIN foomatchlist1 fml1
ON fl1.foointeger1 = fml1.foomatchinteger1
WHERE fl1.fookey = f.fookey AND fml1.foomatchkey = fm.foomatchkey
)
AND EXISTS
(
SELECT NULL FROM foolist1 fl1
INNER JOIN foomatchlist2 fml2
ON fl1.foointeger1 = fml2.foomatchinteger2
WHERE fl1.fookey = f.fookey AND fml2.foomatchkey = fm.foomatchkey
)
Jacco Schalkwijk
SQL Server MVP
"jason" <iaesun@.yahoo.com> wrote in message
news:1128432811.427135.37460@.g44g2000cwa.googlegroups.com...
> through no fault of my own, there exists a table with varchar(50)
> column, the values of which are comma separated integers. something
> like so:
> create table foo (
> fookey int primary key not null,
> foointegerlist1 varchar(50) null,
> foointegerlist2 varchar(50) null,
> fooprice money null,
> foosize int null
> )
> insert into foo values (1, ',2,3', '1', 10, 100)
> insert into foo values (2, '3', '4', null, 100)
> insert into foo values (3, ',1,11', ',1,5,6', 10, 100)
> insert into foo values (4, ',3,5', ',11', 10, null)
> (yes, the leading comma will randomly appear)
> unpleasant? yes. scheduled to be redesigned? yes. but for now, this is
> what i have to contend with.
> the problem i'm having is that there is another table with a very
> similar column, like so:
> create table foomatch (
> foomatchkey int primary key not null,
> foomatchintegerlist1 varchar(50) null
> foomatchintegerlist2 varchar(50) null,
> foomatchprice money null,
> foomatchsize int null
> )
> insert into foomatch values (1, ',2', ',4,5,6', 10, 100)
> insert into foomatch values (2, ',1,3,6', ',1', null, null)
> the goal of the select statement is to capture all of the rows in the
> table foo that "match" the values of a given single row in the foomatch
> table. the rules for matching are as follows:
> foomatch.foomatchprice = foo.fooprice AND
> foomatch.foomatchsize = foo.foosize AND
> (this is where it gets dicey for me)
> at least ONE of the integer values in foomatch.foomatchintegerlist1
> must occur in the foo.foointegerlist1 column AND
> at least ONE of the integer values in foomatch.foomatchintegerlist2
> must occur in the foo.foointegerlist1 column
> where foomatch.foomatchkey = 1 (for example)
> i have a user defined function that can turn a comma separated string
> value into a single column table. i mention this in case building
> tables of the values will make this easier / possible (perhaps with
> relational math?)
> thanks in advance for any help, and just let me know if you need
> clarification,
> jason
>

ahh! i think i see the solution here. yes, i think i can modify my
udf's slightly to make this work. i will give that a shot.
thanks very much
jason
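
For reference, a minimal version of such a split function, written as a sketch (the poster's own UDF may differ; the leading-comma handling here is an assumption based on the sample data):

```sql
-- Turn a comma-separated list like ',2,3' or '3' into a one-column table of ints.
CREATE FUNCTION dbo.SplitIntegerList (@list varchar(50))
RETURNS @result TABLE (value int)
AS
BEGIN
    DECLARE @pos int, @item varchar(50)
    -- normalise: strip the random leading comma, append a trailing one as a sentinel
    IF LEFT(@list, 1) = ',' SET @list = SUBSTRING(@list, 2, LEN(@list))
    SET @list = @list + ','
    WHILE LEN(@list) > 1
    BEGIN
        SET @pos = CHARINDEX(',', @list)
        SET @item = LEFT(@list, @pos - 1)
        IF @item <> '' INSERT INTO @result (value) VALUES (CAST(@item AS int))
        SET @list = SUBSTRING(@list, @pos + 1, LEN(@list))
    END
    RETURN
END
```

With foolist1 etc. defined as `SELECT f.fookey, s.value FROM foo f CROSS APPLY dbo.SplitIntegerList(f.foointegerlist1) s` (or a SQL 2000-style derived table), the EXISTS query above plugs straight in.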

Wednesday, March 7, 2012

complex SQL (for me) step 2

Hello, let's look at this proc:

set nocount on
create table Histo
(
Id varchar(10) primary key,
Week varchar(10),
Project varchar(10)
)
insert into Histo (Id, Week, Project)
select '47','2006-12','Internet'
union all
select '48','2006-13','Internet'
union all
select '49','2006-12','Intranet'

go

select *
from Histo

go

SELECT Project,
[2006-12],
[2006-13]
FROM
(SELECT Project, week, id
FROM histo) s
PIVOT
(
max(id)
FOR Week IN ([2006-12],[2006-13])
) p
ORDER BY [Project]
go

drop table histo

There are two blocks of data hard-coded ([2006-12], [2006-13]) and I want something more elegant.

I want to be able to set a begin week and a number of weeks displayed.

Can anyone help me ? Thanks a lot


No help for you there. These values have to be hard-coded in the query for it to work. You will need to use dynamic SQL to build your statement to make it more elegant (well, when speaking of dynamic SQL, elegant is clearly not the right word).

It shouldn't be too hard to dynamically generate the date blocks '[2006-12],[2006-13]', so it might be reasonable.

Go here to vote/discuss on this feature so Microsoft will hear us :)http://lab.msdn.microsoft.com/ProductFeedback/viewfeedback.aspx?feedbackid=6f168645-a348-4644-b369-63849d5b8355
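
A sketch of that dynamic approach against the Histo table above: build the bracketed week list as a string, then execute the pivot with sp_executesql. The @firstWeek/@weekCount knobs are the "begin week and number of weeks" the question asked for; note the variable-concatenation trick's ordering is not guaranteed by the engine, so treat this as illustrative:

```sql
DECLARE @firstWeek varchar(10), @weekCount int, @cols varchar(1000), @sql nvarchar(2000)
SET @firstWeek = '2006-12'
SET @weekCount = 2

-- collect the qualifying weeks into a list like [2006-12],[2006-13]
SELECT @cols = ISNULL(@cols + ',', '') + '[' + Week + ']'
FROM (SELECT DISTINCT TOP (@weekCount) Week
      FROM Histo
      WHERE Week >= @firstWeek
      ORDER BY Week) w

SET @sql = N'SELECT Project, ' + @cols + '
FROM (SELECT Project, Week, Id FROM Histo) s
PIVOT (MAX(Id) FOR Week IN (' + @cols + ')) p
ORDER BY Project'

EXEC sp_executesql @sql
```

Since @cols is spliced into the statement text, only ever build it from trusted data (here, the Week column itself).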

Thanks a lot for your help!

Saturday, February 25, 2012

Complex query I need help with.

CREATE TABLE test (stk_num varchar(3), avg_num real, import_dt smalldatetime)
INSERT INTO test values('aaa',27.44,'1/23/2006')
INSERT INTO test values('aaa',25.00,'1/30/2006')
INSERT INTO test values('aaa',1.76,'2/6/2006')
INSERT INTO test values('bbb',2.45,'1/23/2006')
INSERT INTO test values('bbb',3.98,'1/30/2006')
INSERT INTO test values('bbb',11.99,'2/6/2006')
INSERT INTO test values('ccc',0.00,'1/23/2006')
INSERT INTO test values('ccc',0.00,'1/30/2006')
INSERT INTO test values('ccc',4.11,'2/6/2006')
INSERT INTO test values('ddd',1.87,'1/23/2006')
INSERT INTO test values('ddd',3.87,'1/30/2006')
INSERT INTO test values('ddd',0.0,'2/6/2006')
INSERT INTO test values('eee',0.00,'1/23/2006')
INSERT INTO test values('eee',0.00,'1/30/2006')
INSERT INTO test values('eee',0.00,'2/6/2006')
INSERT INTO test values('fff',57.89,'1/23/2006')
INSERT INTO test values('fff',9.80,'1/30/2006')
INSERT INTO test values('fff',10.15,'2/6/2006')
INSERT INTO test values('ggg',22.09,'1/23/2006')
INSERT INTO test values('ggg',2.44,'1/30/2006')
INSERT INTO test values('ggg',17.82,'2/6/2006')
I have a table that contains the stock # and avg # and import date.
I need to return all the records with the same stock numbers that have a + or - >= 20% change between their avg numbers, but only for the last two import dates.
So for the info given above I need the output to look like this:
Stk_num avg_num import_dt
aaa 25.00 1/30/2006
aaa 1.76 2/6/2006
bbb 3.98 1/30/2006
bbb 11.99 2/6/2006
ccc 0.00 1/30/2006
ccc 4.11 2/6/2006
ddd 3.87 1/30/2006
ddd 0.00 2/6/2006
ggg 2.44 1/30/2006
ggg 17.82 2/6/2006
TIA

Please check if this works for you and let me know.
Thanks!
-- BEGIN SCRIPT
select a.stk_num
, a.avg_num
, a.import_dt
from (
select t1.stk_num
, t1.avg_num
, t1.import_dt
from dbo.test t1
join ( SELECT rank
, stk_num
, import_dt import_dt
FROM ( SELECT T1.stk_num
, T1.import_dt
, (SELECT COUNT(DISTINCT T2.import_dt) FROM test T2 WHERE
T1.stk_num = T2.stk_num and T1.import_dt <= T2.import_dt) AS rank
FROM dbo.test T1) AS X
where rank < 3
) t2
on t1.stk_num = t2.stk_num
and
t1.import_dt = t2.import_dt
) a
inner join
(
select stk_num
from
(
SELECT stk_num
, MAX(CASE rank WHEN 2 THEN import_dt ELSE NULL END) import_dt1
, MAX(CASE rank WHEN 2 THEN avg_num ELSE NULL END) avg_num1
, MAX(CASE rank WHEN 1 THEN import_dt ELSE NULL END) import_dt2
, MAX(CASE rank WHEN 1 THEN avg_num ELSE NULL END) avg_num2
FROM ( SELECT T1.stk_num
, T1.avg_num
, T1.import_dt
, (SELECT COUNT(DISTINCT T2.import_dt) FROM test T2 WHERE T1.stk_num
= T2.stk_num and T1.import_dt <= T2.import_dt) AS rank
FROM dbo.test T1) AS X
where rank < 3
group by stk_num
) t
where avg_num2 > ((avg_num1*0.2)+avg_num1)
or
(avg_num2 < (avg_num1-(avg_num1*0.2)))
) b
on b.stk_num = a.stk_num
-- END SCRIPT
"Chesster" wrote:

> CREATE TABLE test (stk_num varchar(3), avg_num real, import_dt smalldatetime)
> INSERT INTO test values('aaa',27.44,'1/23/2006')
> INSERT INTO test values('aaa',25.00,'1/30/2006')
> INSERT INTO test values('aaa',1.76,'2/6/2006')
> INSERT INTO test values('bbb',2.45,'1/23/2006')
> INSERT INTO test values('bbb',3.98,'1/30/2006')
> INSERT INTO test values('bbb',11.99,'2/6/2006')
> INSERT INTO test values('ccc',0.00,'1/23/2006')
> INSERT INTO test values('ccc',0.00,'1/30/2006')
> INSERT INTO test values('ccc',4.11,'2/6/2006')
> INSERT INTO test values('ddd',1.87,'1/23/2006')
> INSERT INTO test values('ddd',3.87,'1/30/2006')
> INSERT INTO test values('ddd',0.0,'2/6/2006')
> INSERT INTO test values('eee',0.00,'1/23/2006')
> INSERT INTO test values('eee',0.00,'1/30/2006')
> INSERT INTO test values('eee',0.00,'2/6/2006')
> INSERT INTO test values('fff',57.89,'1/23/2006')
> INSERT INTO test values('fff',9.80,'1/30/2006')
> INSERT INTO test values('fff',10.15,'2/6/2006')
> INSERT INTO test values('ggg',22.09,'1/23/2006')
> INSERT INTO test values('ggg',2.44,'1/30/2006')
> INSERT INTO test values('ggg',17.82,'2/6/2006')
>
> I have a table that contains the stock # and avg # and import date.
> I need to return all the records with the same stock numbers that have a +
> or - >= 20% change between their avg numbers but only for the last two import
> dates.
> So for the info given above I need the output to look like this:
> Stk_num avg_num import_dt
> aaa 25.00 1/30/2006
> aaa 1.76 2/6/2006
> bbb 3.98 1/30/2006
> bbb 11.99 2/6/2006
> ccc 0.00 1/30/2006
> ccc 4.11 2/6/2006
> ddd 3.87 1/30/2006
> ddd 0.00 2/6/2006
> ggg 2.44 1/30/2006
> ggg 17.82 2/6/2006
>
> TIA
>

Chesster,
it's easy to accomplish with a join:
select firsts.*, seconds.import_dt dt2, seconds.avg_num num2
from
(select * from #test t1 where not exists(
select 1 from #test t2 where t1.stk_num = t2.stk_num
and t1.import_dt < t2.import_dt)) firsts,
(select * from #test t1 where(
select count(*) from #test t2 where t1.stk_num = t2.stk_num
and t1.import_dt < t2.import_dt
) = 1) seconds
where firsts.stk_num = seconds.stk_num
and firsts.avg_num NOT between seconds.avg_num*0.8 and
seconds.avg_num*1.2
it'll give you 5 rows, not 10 as you requested. To get 10 rows, use
cross join:
select stk_num,
case when n=1 then import_dt else dt2 end import_dt,
case when n=1 then avg_num else num2 end avg_num
from(
select firsts.*, seconds.import_dt dt2, seconds.avg_num num2
from
(select * from #test t1 where not exists(
select 1 from #test t2 where t1.stk_num = t2.stk_num
and t1.import_dt < t2.import_dt)) firsts,
(select * from #test t1 where(
select count(*) from #test t2 where t1.stk_num = t2.stk_num
and t1.import_dt < t2.import_dt
) = 1) seconds
where firsts.stk_num = seconds.stk_num
and firsts.avg_num NOT between seconds.avg_num*0.8 and
seconds.avg_num*1.2
)t1,
(select 1 n union all select 2) t2

Today and maybe tomorrow I will not be at work to try this because my
daughter is sick. When I get back to work I will try it and let you know.
Thanks!
"Edgardo Valdez, MCSD, MCDBA" wrote:
> Please chaec if this works for you and let me know.
> Thanks!
> -- BEGIN SCRIPT
> select a.stk_num
> , a.avg_num
> , a.import_dt
> from (
> select t1.stk_num
> , t1.avg_num
> , t1.import_dt
> from dbo.test t1
> join ( SELECT rank
> , stk_num
> , import_dt import_dt
> FROM ( SELECT T1.stk_num
> , T1.import_dt
> , (SELECT COUNT(DISTINCT T2.import_dt) FROM test T2 WHERE
> T1.stk_num = T2.stk_num and T1.import_dt <= T2.import_dt) AS rank
> FROM dbo.test T1) AS X
> where rank < 3
> ) t2
> on t1.stk_num = t2.stk_num
> and
> t1.import_dt = t2.import_dt
> ) a
> inner join
> (
> select stk_num
> from
> (
> SELECT stk_num
> , MAX(CASE rank WHEN 2 THEN import_dt ELSE NULL END) import_dt1
> , MAX(CASE rank WHEN 2 THEN avg_num ELSE NULL END) avg_num1
> , MAX(CASE rank WHEN 1 THEN import_dt ELSE NULL END) import_dt2
> , MAX(CASE rank WHEN 1 THEN avg_num ELSE NULL END) avg_num2
> FROM ( SELECT T1.stk_num
> , T1.avg_num
> , T1.import_dt
> , (SELECT COUNT(DISTINCT T2.import_dt) FROM test T2 WHERE T1.stk_num
> = T2.stk_num and T1.import_dt <= T2.import_dt) AS rank
> FROM dbo.test T1) AS X
> where rank < 3
> group by stk_num
> ) t
> where avg_num2 > ((avg_num1*0.2)+avg_num1)
> or
> (avg_num2 < (avg_num1-(avg_num1*0.2)))
> ) b
> on b.stk_num = a.stk_num
> -- END SCRIPT
> "Chesster" wrote:
>

Both queries appear to have worked.
Thanks!
"Edgardo Valdez, MCSD, MCDBA" wrote:
> Please chaec if this works for you and let me know.
> Thanks!
> -- BEGIN SCRIPT
> select a.stk_num
> , a.avg_num
> , a.import_dt
> from (
> select t1.stk_num
> , t1.avg_num
> , t1.import_dt
> from dbo.test t1
> join ( SELECT rank
> , stk_num
> , import_dt import_dt
> FROM ( SELECT T1.stk_num
> , T1.import_dt
> , (SELECT COUNT(DISTINCT T2.import_dt) FROM test T2 WHERE
> T1.stk_num = T2.stk_num and T1.import_dt <= T2.import_dt) AS rank
> FROM dbo.test T1) AS X
> where rank < 3
> ) t2
> on t1.stk_num = t2.stk_num
> and
> t1.import_dt = t2.import_dt
> ) a
> inner join
> (
> select stk_num
> from
> (
> SELECT stk_num
> , MAX(CASE rank WHEN 2 THEN import_dt ELSE NULL END) import_dt1
> , MAX(CASE rank WHEN 2 THEN avg_num ELSE NULL END) avg_num1
> , MAX(CASE rank WHEN 1 THEN import_dt ELSE NULL END) import_dt2
> , MAX(CASE rank WHEN 1 THEN avg_num ELSE NULL END) avg_num2
> FROM ( SELECT T1.stk_num
> , T1.avg_num
> , T1.import_dt
> , (SELECT COUNT(DISTINCT T2.import_dt) FROM test T2 WHERE T1.stk_num
> = T2.stk_num and T1.import_dt <= T2.import_dt) AS rank
> FROM dbo.test T1) AS X
> where rank < 3
> group by stk_num
> ) t
> where avg_num2 > ((avg_num1*0.2)+avg_num1)
> or
> (avg_num2 < (avg_num1-(avg_num1*0.2)))
> ) b
> on b.stk_num = a.stk_num
> -- END SCRIPT
> "Chesster" wrote:
>

complex merging query in sql server 2000

Hello,

I have a Database in a SQL Server 2000 where I have different users
tables with equal fields like this:

id (int)
email (varchar)
name (varchar)
address (varchar)
joinedon (datetime)

I want to merge all the tables in one taking rows with the same email
but taking always the more recent fields based on the 'joinedon' field.
So if I have this four different rows:

Tbl email name address joinedon
----------------------------
T1 j@.smith.com johnathan NULL 01/01/95
T2 j@.smith.com NULL barcelona street 01/01/98
T3 j@.smith.com john valencia street 01/01/97
T4 j@.smith.com john Q NULL 01/01/99

And the final row entered in the new table would be

Tbl email name address joinedon
---------------------------
new j@.smith.com john Q barcelona street 01/01/99

I am trying to do this with union statements, but I am not getting the
real merging of data. Any clue?

Thanks for your help.

Considering the following DDL and sample data:

CREATE TABLE TheTable (
id int PRIMARY KEY,
email varchar(50) NOT NULL,
name varchar(50) NULL,
address varchar(50) NULL,
joinedon datetime NOT NULL,
UNIQUE (email, joinedon)
)

INSERT INTO TheTable VALUES (1,
'j@.smith.com','johnathan',NULL,'19950101')
INSERT INTO TheTable VALUES (2, 'j@.smith.com',NULL,'barcelona
street','19980101')
INSERT INTO TheTable VALUES (3, 'j@.smith.com','john','valencia
street','19970101')
INSERT INTO TheTable VALUES (4, 'j@.smith.com','john Q',NULL,'19990101')

The following query provides the expected result:

SELECT x.email, (
SELECT a.name FROM TheTable a
WHERE a.email=x.email
AND a.joinedon=(
SELECT MAX(b.joinedon) FROM TheTable b
WHERE b.email=a.email AND b.name IS NOT NULL
)
) as name, (
SELECT c.address FROM TheTable c
WHERE c.email=x.email
AND c.joinedon=(
SELECT MAX(d.joinedon) FROM TheTable d
WHERE d.email=c.email AND d.address IS NOT NULL
)
) as address,
x.joinedon
FROM (
SELECT email, MAX(joinedon) as joinedon
FROM TheTable
GROUP BY email
) x

Razvan

mrclash wrote:

Quote:

Originally Posted by

Hello,
>
I have a Database in a SQL Server 2000 where I have different users
tables with equal fields like this:
>
id (int)
email (varchar)
name (varchar)
address (varchar)
joinedon (datetime)
>
I want to merge all the tables in one taking rows with the same email
but taking always the more recent fields based on the 'joinedon' field.
So if I have this four different rows:
>
Tbl email name address joinedon
----------------------------
T1 j@.smith.com johnathan NULL 01/01/95
T2 j@.smith.com NULL barcelona street 01/01/98
T3 j@.smith.com john valencia street 01/01/97
T4 j@.smith.com john Q NULL 01/01/99
>
And the final row entered in the new table would be
>
Tbl email name address joinedon
---------------------------
new j@.smith.com john Q barcelona street 01/01/99
>
I am trying doing his with union statements, but i am not getting the
real merging of data. Any clue?
>
Thanks for your help.

Friday, February 17, 2012

Compilation error on store procedure

Hi all,
Here is my error: Server: Msg 245, Level 16, State 1, Procedure NewAcctTypeSP, Line 10
Syntax error converting the varchar value 'The account type is already exist' to a column of data type int.
Here is my procedure:
ALTER PROC NewAcctTypeSP
(@acctType VARCHAR(20), @message VARCHAR(40) OUT)
AS
BEGIN
--checks if the new account type already exists
IF EXISTS (SELECT * FROM AcctTypeCatalog WHERE acctType = @acctType)
BEGIN
SET @message = 'The account type is already exist'
RETURN @message
END

BEGIN TRANSACTION
INSERT INTO AcctTypeCatalog (acctType) VALUES (@acctType)
--if there is an error on the insertion, rolls back the transaction; otherwise, commits the transaction
IF @@error <> 0 OR @@rowcount <> 1
BEGIN
ROLLBACK TRANSACTION
SET @message = 'Insertion failure on AcctTypeCatalog table.'
RETURN @message
END
ELSE
BEGIN
COMMIT TRANSACTION
END

RETURN @@ROWCOUNT
END
GO

--execute the procedure
DECLARE @message VARCHAR(40);
EXEC NewAcctTypeSP 'CDs', @message;
I am not quite sure where I got a type-converting error in my code; can anyone help me solve it?
(p.s. I want to return the @message value to my .aspx page)
Thanks.

You should probably get rid of the RETURN @@ROWCOUNT line. It may be getting confused that sometimes you return a varchar and other times an int (the rowcount).
Marcie

Hi Marcie,
I have gotten rid of the @@RowCount; however, I didn't get a value from my @message, it just returned 'undefined'.
Do you know why, and how can I fix it?

Where is it undefined, from your code or still running your test proc?
Thanks.|||Where is it undefined, from your code or still running your test proc?
Marcie|||

when I executed my .aspx page, I intentionally input a duplicate value in a field; it should return the error message that I have defined in my stored procedure, however I just got 'undefined' instead of my error message...
so, here is my .aspx code:
public function SubmitClick (sender:Object, e:EventArgs) : void
{
if (Page.IsValid)
{
myConnection = new SqlConnection(System.Configuration.ConfigurationSettings.AppSettings("ConnectionString"));
var acctTypeDA : SqlDataAdapter = new SqlDataAdapter ("select * from AcctTypeCatalog", myConnection);

acctTypeDA.InsertCommand = new SqlCommand("NewAcctType", myConnection);
acctTypeDA.InsertCommand.CommandType = CommandType.StoredProcedure;
acctTypeDA.InsertCommand.Parameters.Add(new SqlParameter("@acctType", SqlDbType.VarChar, 20)).Value = accountType.Text;

var myParm : SqlParameter = acctTypeDA.InsertCommand.Parameters.Add("@message", SqlDbType.VarChar, 40);
myParm.Direction = ParameterDirection.Output;

<%--try to get value from @message and assign it to the variable --%>
var msg : String = acctTypeDA.InsertCommand.Parameters("@message").Value;
msgLabel.Text = msg;

BindGrid();

}
else
{
msgLabel.Text = "The Page contains error!"
}
}
my store procedure:
ALTER PROC NewAcctTypeSP
(@acctType VARCHAR(20), @message VARCHAR(40) OUT)
AS
BEGIN
--checks if the new account type already exists
IF EXISTS (SELECT * FROM AcctTypeCatalog WHERE acctType = @acctType)
BEGIN
SET @message = 'The account type is already exist'
RETURN
END

BEGIN TRANSACTION
INSERT INTO AcctTypeCatalog (acctType) VALUES (@acctType)

--if there is an error on the insertion, rolls back the transaction; otherwise, commits the transaction
IF @@error <> 0 OR @@rowcount <> 1
BEGIN
ROLLBACK TRANSACTION
SET @message = 'Insertion failure on AcctTypeCatalog table.'
RETURN
END
ELSE
BEGIN
SET @message = 'Insertion Successful!'
COMMIT TRANSACTION
END
RETURN
END
GO
I have tested my procedure and it works fine in SQL Server; however, I don't know why I couldn't get the value from @message on my .aspx page.
Any idea?
Thanks

What language is that? You won't be able to get the value of your OUTPUT parameter until right after the command has executed, something like an ExecuteNonQuery line.
Marcie

hi Marcie,
I used JScript to create my page using the stored procedure. (JScript is very similar to C#)
I have reviewed the tutorial from ASP.NET and it doesn't have any ExecuteNonQuery statement in the following example:

<script language="JScript" runat="server">
public function GetEmployees_Click(sender : Object, e : EventArgs) : void
{
var myConnection : SqlConnection = new SqlConnection(System.Configuration.ConfigurationSettings.AppSettings("NWString"));
var myCommand : SqlDataAdapter = new SqlDataAdapter("SalesByCategory", myConnection);
myCommand.SelectCommand.CommandType = CommandType.StoredProcedure;
myCommand.SelectCommand.Parameters.Add(new SqlParameter("@CategoryName", SqlDbType.NVarChar, 15));
myCommand.SelectCommand.Parameters("@CategoryName").Value = SelectCategory.Value;
myCommand.SelectCommand.Parameters.Add(new SqlParameter("@OrdYear", SqlDbType.NVarChar, 4));
myCommand.SelectCommand.Parameters("@OrdYear").Value = SelectYear.Value;
var ds : DataSet = new DataSet();
myCommand.Fill(ds, "Sales");
MyDataGrid.DataSource=ds.Tables("Sales").DefaultView;
MyDataGrid.DataBind();
}
</script>

So, should I use the ExecuteNonQuery statement to get my output parameter? Is there any example you could show me for getting back the output parameter value when using a stored procedure (it's ok if it's written in VB or C#)?
I have tried to find some material about this problem, but couldn't find any of it...
Thanks again.
In the example you show here, the Command gets executed in the myCommand.Fill line; your code didn't have anything like that.
Since you have your command object set up as the InsertCommand of a DataAdapter object, I believe your command would automatically fire when the .Update method is called.
Marcie

The problem is that return parameters in T-SQL are always integers. That is why you got the compilation error, and why you are not getting the value you expect in your client-side code.
Even though you have @message defined as an output parameter, the RETURN @message is causing you to error out.
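
In other words, RETURN can only carry an integer status code; character data has to travel through the OUTPUT parameter, and the OUTPUT keyword must also appear at the call site (the EXEC near the top of the thread omits it, which by itself would leave the variable unset):

```sql
DECLARE @message VARCHAR(40)

-- OUTPUT is required on the call as well as in the proc definition
EXEC NewAcctTypeSP 'CDs', @message OUTPUT

SELECT @message  -- now contains whatever the proc assigned
```

The same rule shows up on the ADO.NET side: the output parameter's Value is only populated after the command actually executes (Fill, Update, or ExecuteNonQuery), as noted above.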

Thanks pjmcb,
I already got my problem fixed.

Friday, February 10, 2012

Comparison between 6.5 and 2000 inserts

Hi,
I am kind of baffled. I have a table with a varchar(8) column in 2000
and the same in 6.5. When I insert into 2000 with a data length of more than
8 chars via Cold Fusion into the table, it fails. The same Cold Fusion
program inserts into the 6.5 table, but truncates the data and does not fail.
Does anyone know why this happens? Thanks.
--
New SQL Server DBA

Check the value of the ANSI_WARNINGS setting for the session.
HTH
Jerry
"Newbie" <Newbie@.discussions.microsoft.com> wrote in message
news:05F1A1E0-A59B-4937-912B-F968C7668FFD@.microsoft.com...
> Hi,
> I am kind of baffeled. I have a table with a column of 8 varchar in 2000
> and the same in 6.5. When I insert into 2000 with a data length of more
> than
> 8 chars via Cold Fusion into the table, it fails. The same Cold Fusion
> program inserts into the 6.5 table, but truncates the data but does not
> fail.
> Does anyone know why this happens. Thanks.
> --
> New SQL Server DBA

Thank you for the suggestion, I will research. But why does it fail the
insert in 6.5 if I run the insert statement, where the data is longer than 8
chars, via SQL Server Query Analyzer and not via Cold Fusion? Shouldn't it
also work in SQL Query Analyzer? Thanks.
--
New SQL Server DBA
"Jerry Spivey" wrote:

> Check the value of the ANSI_WARNINGS setting for the session.
> HTH
> Jerry
> "Newbie" <Newbie@.discussions.microsoft.com> wrote in message
> news:05F1A1E0-A59B-4937-912B-F968C7668FFD@.microsoft.com...
>
>

ANSI_WARNINGS is a connection setting. QA probably turns this on by default
where ColdFusion doesn't (leaves it at the default).
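
The difference is easy to reproduce in Query Analyzer (a sketch with an invented temp table):

```sql
CREATE TABLE #t (c varchar(8))

SET ANSI_WARNINGS ON
INSERT INTO #t VALUES ('123456789')  -- fails: string or binary data would be truncated

SET ANSI_WARNINGS OFF
INSERT INTO #t VALUES ('123456789')  -- accepted, silently truncated to '12345678'

SELECT c FROM #t
DROP TABLE #t
```

Toggling the setting in the client's connection options reproduces either behavior from either tool.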
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Newbie" <Newbie@.discussions.microsoft.com> wrote in message
news:75EDE8A5-D913-4084-A7C9-28D80AC41202@.microsoft.com...
> Thank you for the suggestion, I will research. But why does it fail the
> insert in 6.5 if I run the insert statement, where the data is longer that 8
> char, via sql server query analyzer and not via Cold Fusion? Shouldn't it
> also work in sql query analyzer? Thanks.
> --
> New SQL Server DBA
>
> "Jerry Spivey" wrote:
>

Comparison between 6.5 and 2000 inserts

Hi,
I am kind of baffled. I have a table with a varchar(8) column in SQL Server 2000
and the same in 6.5. When I insert data longer than 8 characters into the 2000
table via Cold Fusion, the insert fails. The same Cold Fusion program inserts
into the 6.5 table, truncating the data, but does not fail.
Does anyone know why this happens. Thanks.
--
New SQL Server DBA|||Check the value of the ANSI_WARNINGS setting for the session.
HTH
Jerry|||Thank you for the suggestion, I will research. But why does the insert fail
in 6.5 if I run the insert statement, where the data is longer than 8 characters,
via SQL Server Query Analyzer and not via Cold Fusion? Shouldn't it also work in
Query Analyzer? Thanks.
--
New SQL Server DBA|||ANSI_WARNINGS is a connection setting. QA probably turns this on by default
where ColdFusion doesn't (leaves it to default).
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
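Tibor's point can be verified directly. A minimal sketch, assuming a throwaway table (the table and column names are illustrative, not from the thread):

```sql
CREATE TABLE dbo.demo (code varchar(8))

-- With ANSI_WARNINGS ON (Query Analyzer's usual default), an oversized
-- value makes the insert fail with a truncation error:
SET ANSI_WARNINGS ON
INSERT INTO dbo.demo (code) VALUES ('123456789')   -- fails

-- With ANSI_WARNINGS OFF (as some client drivers leave it), the value
-- is silently truncated to the column length instead:
SET ANSI_WARNINGS OFF
INSERT INTO dbo.demo (code) VALUES ('123456789')   -- inserts '12345678'

SELECT code FROM dbo.demo

DROP TABLE dbo.demo
```

Running DBCC USEROPTIONS from the client in question is usually the quickest way to confirm which setting, and therefore which behaviour, that connection actually gets.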
Comparing varchar value in int type column

Hi,

I have a varchar(255) field on the control_value table which contains 10,159,711. These values are organization_ids (type int) separated by a comma.

I am trying to exclude these organization IDs with the following statement:

DECLARE @exclude_clients varchar(255)

SELECT @exclude_clients = value
FROM control_value
WHERE parameter = 'client_excluded'

select * from organization org
where org.organization_id not in (@exclude_clients)

I get the following error:

Server: Msg 245, Level 16, State 1, Line 7
Syntax error converting the varchar value '10,159,711' to a column of data type int.

Is there any way around this?

You can't use a variable as the value list of an IN clause.

But you could use dynamic SQL for this task:

Code Snippet

DECLARE @exclude_clients varchar(255)

SELECT @exclude_clients = value
FROM control_value
WHERE parameter = 'client_excluded'

declare @query varchar(1000)
set @query = '
select * from organization org
where org.organization_id not in (' + @exclude_clients + ')'

EXECUTE(@query)

Another approach - split @exclude_clients into a table, then use a join clause. You could use ideas from this link http://vyaskn.tripod.com/passing_arrays_to_stored_procedures.htm

|||

You can't use 'NOT IN' in this way - i.e. use it to identify multiple values from a comma-separated list provided as a single parameter.

The NOT IN condition as written will cause the entire @exclude_clients string to be compared with org.organization_id. However, due to datatype precedence, SQL Server attempts to convert the string to an integer (i.e. the same datatype as org.organization_id) before the comparison - which is why you're experiencing the error. If the value of @exclude_clients was '45' then the query wouldn't cause an error.

One way of performing the task would be to build up your SELECT statement dynamically, see below.

Chris

Code Snippet

DECLARE @exclude_clients VARCHAR(255)

SELECT @exclude_clients = value
FROM control_value
WHERE parameter = 'client_excluded'

/*
SELECT *
FROM organization org
WHERE org.organization_id NOT IN (@exclude_clients)
*/

DECLARE @sql VARCHAR(4000)

SET @sql = 'SELECT * FROM organization org WHERE org.organization_id NOT IN ('
    + @exclude_clients + ')'

EXEC (@sql)

|||

Code Snippet


DECLARE @t1 table (exclude_id int)

insert into @t1
select 10 union
select 159 union
select 711

or, if control_value is a table with one value per row, then

insert into @t1
select value
from control_value
where parameter = 'client_excluded'

Then you can just use @t1 as a table in your other query(ies):

select * from organization org
where org.organization_id not in (select exclude_id from @t1)

|||Thanks Konstantin Kosinsky and Chris Howarth. The dynamic query works fine and is really helpful.
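For the split-into-a-table approach mentioned above, a minimal sketch of parsing the comma-separated string with CHARINDEX (variable names are illustrative, not from the thread):

```sql
DECLARE @exclude_clients varchar(255), @pos int
DECLARE @ids table (exclude_id int)

SELECT @exclude_clients = value
FROM control_value
WHERE parameter = 'client_excluded'   -- e.g. '10,159,711'

-- Peel off one id at a time, up to each comma
WHILE LEN(@exclude_clients) > 0
BEGIN
    SET @pos = CHARINDEX(',', @exclude_clients)
    IF @pos = 0
    BEGIN
        -- Last (or only) id in the list
        INSERT INTO @ids VALUES (CAST(@exclude_clients AS int))
        SET @exclude_clients = ''
    END
    ELSE
    BEGIN
        INSERT INTO @ids VALUES (CAST(LEFT(@exclude_clients, @pos - 1) AS int))
        SET @exclude_clients = SUBSTRING(@exclude_clients, @pos + 1, 255)
    END
END

SELECT * FROM organization org
WHERE org.organization_id NOT IN (SELECT exclude_id FROM @ids)
```

Unlike the dynamic-SQL version, this keeps the query plan cacheable and avoids concatenating user-controlled text into executable SQL.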

Comparing VarCHAR FIELD with NULL

Hi, I have the following query

SELECT *
FROM PABX
INNER JOIN LOGIN ON (PABX.COD_CLIENTE = LOGIN.COD_CLIENTE)
AND LEFT(LOGIN.TELEFONE1,3) = LEFT(PABX.NRTELEFONE,3)
LEFT JOIN AUXILIAR ON (AUXILIAR.ORIGEM=LOGIN.LOCALIDADE)
WHERE
pabx.COD_cliente = 224 and
SUBSTRING(PABX.NRTELEFONE,4,1) NOT IN ('9', '8', '7')
AND LOGIN.UF = RIGHT(PABX.LOCALIDADE,2)
AND LOGIN.LOCALIDADE <> PABX.LOCALIDADE
AND PABX.CLASSIFICA IS NULL
AND PABX.LOCALIDADE <> AUXILIAR.DESTINO
AND (BLOQUEADO = 0 OR BLOQUEADO IS NULL)

But there is a problem: when AUXILIAR.DESTINO is NULL (meaning there is no matching row), the condition AND PABX.LOCALIDADE <> AUXILIAR.DESTINO doesn't work. 'SAO PAULO' should count as different from NULL, but my query doesn't even treat them as unequal, and this condition omits those rows from the results... how can I solve it?

PS: Both auxiliar.destino and pabx.localidade is varchar(255)

Thanks

What happens if you move the condition into the JOIN, like this?

SELECT *
FROM PABX
INNER JOIN LOGIN ON (PABX.COD_CLIENTE = LOGIN.COD_CLIENTE)
AND LEFT(LOGIN.TELEFONE1,3) = LEFT(PABX.NRTELEFONE,3)
LEFT JOIN AUXILIAR ON (AUXILIAR.ORIGEM=LOGIN.LOCALIDADE)
AND PABX.LOCALIDADE <> AUXILIAR.DESTINO
WHERE
pabx.COD_cliente = 224 and
SUBSTRING(PABX.NRTELEFONE,4,1) NOT IN ('9', '8', '7')
AND LOGIN.UF = RIGHT(PABX.LOCALIDADE,2)
AND LOGIN.LOCALIDADE <> PABX.LOCALIDADE
AND PABX.CLASSIFICA IS NULL

Denis the SQL Menace

http://sqlservercode.blogspot.com/


|||

It works, thanks a LOT

I lov u...

|||

you can also change PABX.LOCALIDADE <> AUXILIAR.DESTINO to

PABX.LOCALIDADE <> COALESCE(AUXILIAR.DESTINO,'')

|||

Which one do you think is better?

Thanks
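Both fixes work because, under the default ANSI_NULLS ON setting, comparing anything to NULL with <> yields UNKNOWN rather than TRUE, and a WHERE clause only keeps rows where the condition is TRUE. A minimal sketch of the difference:

```sql
-- A <> comparison against NULL evaluates to UNKNOWN, so the row is dropped:
SELECT CASE WHEN 'SAO PAULO' <> NULL THEN 'kept' ELSE 'dropped' END
-- returns 'dropped' (with ANSI_NULLS ON, the default)

-- COALESCE substitutes a non-NULL sentinel, so the comparison can succeed:
SELECT CASE WHEN 'SAO PAULO' <> COALESCE(AUXILIAR.DESTINO, '') THEN 'kept' ELSE 'dropped' END
-- returns 'kept' when DESTINO is NULL
```

As to which is better: Denis's version expresses the rule as part of the LEFT JOIN's ON clause, so unmatched rows are still preserved; the COALESCE version keeps all the filtering in the WHERE clause but gives NULL a concrete value to compare against. Both avoid the UNKNOWN result, so the choice is largely a matter of style.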

comparing two varchar variable

Table One has column id values 'AAA-BBB-CCC', 'AAA-BBB-DDD'.
Table Two has column id values 'AAA-BBB-CCCI', 'AAA-BBB-DDDI'.
I want to return every One.id which appears (as a prefix) in Two.id by comparing
the varchar columns, e.g.
select One.id from One, Two
where One.id like (Two.id + '%')
Can someone show me a better way. Thanks|||Maybe this?
select * from one where id in (select id from two)
Ben Nevarez
"mtv" <mtv@.discussions.microsoft.com> wrote in message
news:22EE0FB6-FDFD-4824-A5C8-D4352A83FC4D@.microsoft.com...
> Table One has for column id = 'AAA-BBB-CCC' , 'AAA-BBB-DDD'
> Table Two has column id = 'AAA-BBB-CCCI' , 'AAA-BBB-DDDI'
> I want to return all Table One.id which appear in table Two.id by
> comparing
> the varchar column e.g
> select One.id from One, Two
> where One.id like (Two.id + '%')
> Can someone show me a better way. Thanks|||Thank you Ben, but I believe it will not work. The ids in table Two have one additional letter at the end.
cheers
mtv
"Ben Nevarez" wrote:

> Maybe this?
> select * from one where id in (select id from two)
> Ben Nevarez
>
> "mtv" <mtv@.discussions.microsoft.com> wrote in message
> news:22EE0FB6-FDFD-4824-A5C8-D4352A83FC4D@.microsoft.com...
>
>|||Would something like this help?
=====
CREATE TABLE test1
(
colA VARCHAR(10)
)
GO
CREATE TABLE test2
(
colB VARCHAR(10)
)
GO
INSERT INTO test1 SELECT 'AAABBBCCC'
INSERT INTO test1 SELECT 'XXXYYYZZZ'
INSERT INTO test2 SELECT 'AAABBBCCCD'
GO
SELECT
test1.colA, test2.colB
FROM
test1
INNER JOIN test2 ON test1.colA = LEFT (test2.colB, 9)
=====
--
HTH,
SriSamp
Email: srisamp@.gmail.com
Blog: http://blogs.sqlxml.org/srinivassampath
URL: http://www32.brinkster.com/srisamp
"mtv" <mtv@.discussions.microsoft.com> wrote in message
news:22EE0FB6-FDFD-4824-A5C8-D4352A83FC4D@.microsoft.com...
> Table One has for column id = 'AAA-BBB-CCC' , 'AAA-BBB-DDD'
> Table Two has column id = 'AAA-BBB-CCCI' , 'AAA-BBB-DDDI'
> I want to return all Table One.id which appear in table Two.id by
> comparing
> the varchar column e.g
> select One.id from One, Two
> where One.id like (Two.id + '%')
> Can someone show me a better way. Thanks
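Building on SriSamp's LEFT() idea, the hard-coded prefix length of 9 can instead be computed from each One.id, so the join works for ids of any length. A sketch, assuming every matching Two.id starts with the full value of some One.id:

```sql
-- Match rows where Two.id begins with the entire value of One.id:
SELECT One.id
FROM One
INNER JOIN Two
    ON LEFT(Two.id, LEN(One.id)) = One.id
```

This is logically close to the LIKE version in the question, but the prefix length is taken from One.id itself, and wildcard characters ('%', '_') in the stored data cannot accidentally affect the match.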