Showing posts with label single. Show all posts

Thursday, March 29, 2012

Concatenate single quote...

Hi,
I am trying to concatenate a single quotation mark ( ' ) in a select query, but without success.
If I want to concatenate the word India's, how should I write the select query? It's just a generic query; I'm not concerned with any specific database.

select a='India''s', b='India'+'''s'

a b
--- ---
India's India's

(1 row(s) affected)

Thanx buddy...
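For readers outside SQL Server, the doubling rule for single quotes applies in most SQL dialects. Here is a minimal sketch using Python's built-in sqlite3 module (SQLite concatenates with `||` rather than T-SQL's `+`; the queries are illustrative only). For user-supplied values, a parameterized query avoids manual escaping entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Inside a SQL string literal, a single quote is escaped by doubling it.
# '''' is a literal containing exactly one quote character.
row = cur.execute("SELECT 'India''s' AS a, 'India' || '''' || 's' AS b").fetchone()
print(row[0], row[1])  # India's India's

# For user-supplied values, parameter substitution needs no escaping at all:
safe = cur.execute("SELECT ?", ("India's",)).fetchone()[0]
```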

Concatenate Columm Values from multiple Rows into a single col

Yes, the order is not guaranteed.
ML
http://milambda.blogspot.com/

ML (ML@.discussions.microsoft.com) writes:
> Yes, the order is not guaranteed.
Not even that. You are not even guaranteed to get all rows. For 1, 2, 3, 4
you could get '1,2,3,4' or you could get only '4'.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

That would make the function completely useless - could you give an example, please? I've tested it with a few typical set-ups and have always found it to return expected results.
ML
http://milambda.blogspot.com/

ML (ML@.discussions.microsoft.com) writes:
> That would make the function completely useless - could you give an
> example, please? I've tested it with a few typical set-ups and have
> always found it to return expected results.
Check out http://support.microsoft.com/default.aspx?scid=287515, and pay
particular attention to the first sentence under CAUSE.
Never mind that the article then bends over backwards to specify things that
may work. For me the conclusion is clear: don't rely on this.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

I agree using functions in the ORDER BY clause in this case is a disaster waiting to happen, but since my function does not use them at all, were you able to reproduce the problem anyway?
If you're too busy to play with this, I absolutely understand. I'm just
trying to learn new things every day. I promise I'll stop a few days after
I'm dead. ;)
ML
http://milambda.blogspot.com/

ML (ML@.discussions.microsoft.com) writes:
> I agree using functions in the ORDER BY clause in this case is a
> disaster waiting to happen, but since my function does not use them at
> all, were you able to reproduce the problem anyway?
My point is not that I can reproduce it here and now. My point is that
what works today, could break tomorrow.
For instance, there are people out there who have defined views in
SQL 2000 which go:

SELECT TOP 100 PERCENT
..
ORDER BY

and they are happy because when they say:

SELECT * FROM view1

they see the data in order.
Then they move to SQL 2005 and get hit, because the optimizer is now
less likely to return the data in order. The truth was all the time
that without an ORDER BY, the order of the data is undefined.
See also
http://lab.msdn.microsoft.com/produ...b9-3dd863ae6b1c
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

This bug report clears up the matter greatly. Thank you very much. I intend
to include this example in my blog as a warning ASAP.
I've searched the web for this issue, but found no usable references. Thanks
again.
ML
http://milambda.blogspot.com/
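The pitfall Erland describes (variable concatenation returning '4' instead of '1,2,3,4') comes from T-SQL's undefined evaluation order for that construct. One conservative workaround, sketched here with Python and sqlite3 (table and column names are invented for illustration), is to let the query handle only the ordering and do the concatenation in the client, where the order of the fetched result set is honored:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE nums (n INT)")
cur.executemany("INSERT INTO nums VALUES (?)", [(4,), (1,), (3,), (2,)])

# ORDER BY is guaranteed for the returned result set, so concatenating
# client-side is deterministic, unlike in-query variable concatenation.
rows = cur.execute("SELECT n FROM nums ORDER BY n").fetchall()
csv = ",".join(str(n) for (n,) in rows)
print(csv)  # 1,2,3,4
```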

Tuesday, March 27, 2012

concat all col2 values for each col1, and add sum(col3) (was "query help")

Hi,
Can anybody help me to create a single query? I have this problem.

CREATE TABLE t1 (
col1 VARCHAR(100)
, col2 VARCHAR(100)
, col3 INT)

INSERT INTO t1 VALUES('A001','Tom',30)
INSERT INTO t1 VALUES('A001','Rick',40)
INSERT INTO t1 VALUES('A001','Harry',10)

INSERT INTO t1 VALUES('A002','Peter',50)
INSERT INTO t1 VALUES('A002','Sam',50)

INSERT INTO t1 VALUES('A003','Fred',50)

I want a resultset like this ...
i.e. col1, col2 (all the col2 values represented in a single row for each col1) and sum(col3)

(Note: There can be a maximum of three records for each col1 value, i.e. for A001 there can be at most three records.)

A001 Tom Rick Harry 80 --sum(col3)
A002 Peter Sam NULL 100
A003 Fred NULL NULL 50

Any help would be greatly appreciated!!

> (Note: There can be a maximum of three records for each col1 value, i.e. for A001 there can be at most three records.)

Based on this, the below works. I think it is about as efficient as it can be, though verbose for the sake of transparency :)

SET NOCOUNT ON
CREATE TABLE t1 (
col1 VARCHAR(100)
, col2 VARCHAR(100)
, col3 INT)

INSERT INTO t1 VALUES('A001','Tom',30)
INSERT INTO t1 VALUES('A001','Rick',40)
INSERT INTO t1 VALUES('A001','Harry',10)
INSERT INTO t1 VALUES('A002','Peter',50)
INSERT INTO t1 VALUES('A002','Sam',50)
INSERT INTO t1 VALUES('A003','Fred',50)

SELECT Col1,
Col2a,
Col2b,
Col2c,
SUM(Col3) AS TheTotal
FROM --Pivot data
(SELECT TOP 100 PERCENT
Col1,
(SELECT TOP 1 Col2
FROM dbo.t1 AS B
WHERE A.Col1 = B.Col1
ORDER BY
B.Col2) AS Col2a,
(SELECT TOP 1 Col2
FROM dbo.t1 AS B
WHERE A.Col1 = B.Col1
AND B.Col2 NOT IN (SELECT TOP 1 Col2
FROM dbo.t1 AS C
WHERE C.Col1 = A.Col1
ORDER BY
C.Col2)
ORDER BY
B.Col2) AS Col2b,
(SELECT TOP 1 Col2
FROM dbo.t1 AS B
WHERE A.Col1 = B.Col1
AND B.Col2 NOT IN (SELECT TOP 2 Col2
FROM dbo.t1 AS C
WHERE C.Col1 = A.Col1
ORDER BY
C.Col2)) AS Col2c,
Col3
FROM dbo.t1 AS A
ORDER BY
Col1,
Col2) AS DerT
GROUP BY
Col1,
Col2a,
Col2b,
Col2c

DROP TABLE t1

SET NOCOUNT OFF
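For comparison, the TOP 1 / NOT IN pivot above can be sketched and checked outside SQL Server as well. Here is a rough Python/sqlite3 translation (LIMIT/OFFSET standing in for TOP n / NOT IN; the correlated-subquery idea is the same, and like the original it orders names alphabetically within each col1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (col1 TEXT, col2 TEXT, col3 INT)")
cur.executemany("INSERT INTO t1 VALUES (?,?,?)", [
    ("A001", "Tom", 30), ("A001", "Rick", 40), ("A001", "Harry", 10),
    ("A002", "Peter", 50), ("A002", "Sam", 50),
    ("A003", "Fred", 50),
])

# Pick the 1st/2nd/3rd col2 per col1 via correlated subqueries, plus the total.
rows = cur.execute("""
    SELECT col1,
           (SELECT col2 FROM t1 b WHERE b.col1 = a.col1
             ORDER BY b.col2 LIMIT 1)           AS col2a,
           (SELECT col2 FROM t1 b WHERE b.col1 = a.col1
             ORDER BY b.col2 LIMIT 1 OFFSET 1)  AS col2b,
           (SELECT col2 FROM t1 b WHERE b.col1 = a.col1
             ORDER BY b.col2 LIMIT 1 OFFSET 2)  AS col2c,
           SUM(col3)                            AS TheTotal
    FROM t1 a
    GROUP BY col1
    ORDER BY col1
""").fetchall()
for r in rows:
    print(r)
```

Missing slots come back as NULL (None), matching the requested result shape.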
HTH

> Based on this, the below works. I think it is about as efficient as it can be, though verbose for the sake of transparency :)

Awesome, as usual... thanks a ton Pootie!! :rolleyes:

Nevermind, you have a better solution above.

As a possible alternative, blindman's neat function here could be adapted, resulting in a much simpler query:

SELECT col1, dbo.Concat_ICD(col1) as TheNames, Sum(col3) as TheTotal
FROM t1
GROUP BY col1

http://www.dbforums.com/showthread.php?t=1605725

This would not produce the visible NULL in the result, but I was presuming that wasn't a requirement.

The function is your best solution, because it works for any number of records.

By the way, I wish I could take credit for that function, but it is actually one of the many things I have learned from participating in this forum over the past few years.

Actually, that was the solution I hoped to use - it allows n values to be concatenated. However, I read the requirement as putting the names into three columns rather than one. If that isn't a requirement, then defo go with Blindman's solution.

I wanted it in three different columns, so I used Pootie's one.
Anyways, thanks everybody for the valuable info.

For comparison purposes, here is the equivalent query in MySQL:

select col1
, group_concat(col2)
, sum(col3)
from daTable
group
by col1

:)

I'm calling you on that one Rudy. Did you read the requirements carefully?

...but here is a shorter method of coding it for SQL Server:

SET NOCOUNT ON
CREATE TABLE #t1
(col1 VARCHAR(100),
col2 VARCHAR(100),
col3 INT)

INSERT INTO #t1 VALUES('A001','Tom',30)
INSERT INTO #t1 VALUES('A001','Rick',40)
INSERT INTO #t1 VALUES('A001','Harry',10)
INSERT INTO #t1 VALUES('A002','Peter',50)
INSERT INTO #t1 VALUES('A002','Sam',50)
INSERT INTO #t1 VALUES('A003','Fred',50)

select A.col1,
min(A.col2) as name1,
min(B.col2) as name2,
min(C.col2) as name3,
max(coalesce(A.col3, 0) + coalesce(B.col3, 0) + coalesce(C.col3, 0)) as col3total
from #t1 A
left outer join #t1 B on A.col1 = B.col1 and A.col2 < B.col2
left outer join #t1 C on B.col1 = C.col1 and B.col2 < C.col2
group by A.col1

drop table #t1

> I'm calling you on that one Rudy. Did you read the requirements carefully?

oh, SHEEEEEEEESH, okay :S
select col1
, group_concat(col2 separator ' ')
, sum(col3)
from t1
group
by col1

> for comparison purposes, here is the equivalent query in mysql --
> select col1
> , group_concat(col2)
> , sum(col3)
> from daTable
> group
> by col1 :)

...and that would be just great if it was a real ANSI compliant databa...

Oh, never mind

pot? meet kettle

kettle? meet pot

:p|||That was great,thank you Blindman.And thank you all of you for your help.

Concatenating Dates

I am transferring data between DBs. One has DATE and TIME as two individual fields. I need to concatenate them into a single DATETIME field. I am sure the solution is simple, but I can't figure it out.
Any advice?
Thanks!

If you select from your time column, you should see that its date is set to January 1st, 1900, like this:
1900-01-01 14:59:27.293

Your date column should show a date as of midnight like this:
2003-07-14 00:00:00.000

If this is the case, you can just add these values together to concatenate them:

select @Yourdate + @Yourtime

If this is not the case, you will need to concatenate them as formatted strings and then cast or convert the result to a datetime value.

blindman
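blindman's addition trick works because both values are datetimes sharing the 1900-01-01 anchor. Outside T-SQL, the same combine step looks like this; a sketch with Python's datetime module, using the sample values from the reply above:

```python
from datetime import date, time, datetime

d = date(2003, 7, 14)          # the DATE column's value
t = time(14, 59, 27, 293000)   # the TIME column's value

# datetime.combine pairs a date with a time into one datetime,
# the equivalent of adding the two SQL Server values together.
dt = datetime.combine(d, t)
print(dt)  # 2003-07-14 14:59:27.293000
```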

Sunday, March 25, 2012

Computed measures and role-playing dimensions

I've got a database with a single Date dimension that is used as a role-playing dimension in the cube. Measure groups have 2-4 different role relationships with the date dimension (e.g. sell date, ship date).

I've got some calculated members, defined in the style used by the Time Intelligence wizard, roughly:

Scope(
{
[Measures].[Amount],
[Measures].[Count]
}
);

( [Period].[Period].[60 calendar days],
[First Date].[Date].[Date].Members ) =

Aggregate(
{ [Period].[Period].DefaultMember } *
{ [First Date].[Date].CurrentMember.lag(59) : [First Date].[Date].CurrentMember }
);

( [Period].[Period].[60 calendar days],
[Second Date].[Date].[Date].Members ) =

Aggregate(
{ [Period].[Period].DefaultMember } *
{ [Second Date].[Date].CurrentMember.lag(59) : [Second Date].[Date].CurrentMember }
);


End Scope;

Here [First Date] and [Second Date] represent two of my role-playing date dimensions.

What I'm noticing is that this doesn't work: the calculated members are calculated correctly for whichever date dimension is listed last in the script and are apparently erased for whichever one was listed first.

The particular measure group I'm using in my sample query has a relationship with both date roles, and in fact they're identical for this measure group - both are based on the same underlying column in the relational data (that's not true for all measure groups, it just happens to be in this case).

Can someone explain what I'm seeing? Is this a bug, or a subtlety that I haven't accounted for? How can I make this work for 2 or more role-playing date dimensions at the same time?

Well, if you position the current coordinate on the [First Date].[Date].[Date] level, you will see that the first formula is applied correctly. What you probably mean by saying "the last formula erases whatever was listed first" is that if you look at the aggregate level, the aggregation is computed from the last formula. Obviously, at the aggregate level there is a conflict over which formula should be used to aggregate - it could be either the first or the second. The precedence rules in AS say that the later assignment wins. This is not a bug; after all, a single cell cannot be calculated using two different formulas!

Perhaps if you explain what exact results you would like to see for specific queries, this forum will be able to craft an MDX script to solve it.


I neglected to include representative queries, so here's one that corresponds to the names I used in the first post:

select
{
[Measures].[Amount]
} on columns,
{
[Period].[Period].&[60]
} on rows
from
[Database]
where
(
-- [First Date].[Date].&[20061228]
-- [Second Date].[Date].&[20061228]
)

([Period].[Period].&[60] is the same member as [Period].[Period].[60 calendar days])

Given the computations above, and the fact that [First Date] and [Second Date] actually refer to the same column in the fact table for this measure group, I'd expect the above query to return the same results with either of the two lines in the where clause uncommented, but that's not the case.

Whichever date dimension corresponds to the second set of calculations in the script produces the correct result, while the other produces a wrong result. I'm not sure exactly what the wrong result represents - it's a larger number than anything I can think up that might make sense (e.g. it's not the aggregation totally ignoring the date dimension in question - it's something else).


Mosha Pasumansky wrote:

Obviously, at the aggregate level there is a conflict - which formula should be used to aggregate - it could be either first or second. The precendence rules in AS say that the later assignment wins. This is not a bug, after all, single cell cannot be calculated using two different formulas !

Why is this obvious? As far as I can see, the calculations are not referencing the same locations in the cube. Please enlighten me! :)


Sorry, I thought it was clear :(

Both calculations apply to the leaf level of their respective Date dimension, i.e. [First Date].[Date].[Date] and [Second Date].[Date].[Date]. So the question is how the value at ([First Date].[Date].[All Dates], [Second Date].[Date].[All Dates]) should be computed. It can aggregate from either one of the lower levels - but depending on which one it aggregates from, the results will be different, so AS has to choose one or the other.


OK, I can see that there's an ambiguity on those particular members, but I'm not querying those members (or am I?).

So how can I get the results I want?

> but I'm not querying those members (or am I?).

Can you please provide the query you are using?

> So how can I get the results I want?

Can you please describe exactly what result you want?

I already did - see my second post in this thread.

I see that you added the query later, although it is still not clear from your description what the expected result is. But I am going to guess that what you want is either LastChild or LastNonEmpty semiadditive aggregation. If this is true, you need to define your measure as semiadditive with this aggregation function. Note that you will still only be able to use either First Date or Second Date, but not both, because there is still a conflict at the aggregate level.

Just to add my 2 cents' worth - based on your comment that "[Period].[Period].&[60] is the same member as [Period].[Period].[60 calendar days]", is this a physical (place-holder) rather than calculated dimension member, to which the trailing-60-day calculation is applied? If so, my guess is that "the fact that [First Date] and [Second Date] actually refer to the same column in the fact table for this measure group" could be significant. If you instead create a [Period].[Period].[60 calendar days] calculated member of the [Period].[Period] hierarchy, do you get the expected results?

I'm quite certain that I don't want LastChild or LastNonEmpty (especially since those are EE only features).

What I want is for the two dimensions to act as two independent dimensions without regard for the conflict at ([First Date].[Date].[All],[Second Date].[Date].[All]) since I will never query any value in that slice.

I'm not sure what it is I'm failing to communicate or understand - this seems like it should be straightforward and not particularly unusual. Lots of cubes have multiple date dimensions (and I expect lots of those have those date dimensions all backed up by a single role-playing dimension in the database).

Any suggestions are welcome - for now, I've abandoned the whole notion (being able to get time-based aggregations based on multiple time dimensions) as being inherently not supported by MDX/SSAS.


It must be my failure to understand what you are trying to achieve, and I apologize for that. It must be something really simple, since it looks straightforward to you, yet I fail to grasp it. Perhaps you should start a new thread, since this one got plenty of replies leading nowhere, and people don't look at threads with complicated histories but do look at fresh threads.

One last note - the cubes with multiple Time dimensions are certainly not unusual, I've seen many of them working fine (even Adventure Works sample features 3 Time dimensions).

You didn't mention what the results of trying my earlier suggestion were - i.e. creating a calculated member, rather than using an actual member, for [Period].[Period].[60 calendar days]?

Sorry, I forgot to reply to your suggestion.

This [Period] dimension is a real, physical dimension in the cube - and is so because I have some measure groups that associate directly with the period dimension (those "limit" measures that Mosha helped me with in another thread).

So, when I say [Period].[Period].[60 Calendar Days] is the same member as [Period].[Period].&[60], that's exactly what I mean - the member with key 60 has the name '60 Calendar Days'.

I really can't experiment with having this be a pure calculated member - I need that physical dimension to be this same dimension so that the relationships within the cube are correct.

Other than getting the "time based intelligence" to work in two time dimensions in the same cube, this all works beautifully. I have an inkling of how to get what I wanted - at the moment, I don't actually need to solve the problem right now, but sooner or later, I'll have to confront it.

I think that the solution is something like this:

Scope(
{
[Measures].[Amount],
[Measures].[Count]
}
);

( [Period].[Period].[60 calendar days],
[First Date].[Date].[Date].Members,
[Second Date].[Date].DefaultMember
) =

Aggregate(
{ [Period].[Period].DefaultMember } *
{ [First Date].[Date].CurrentMember.lag(59) : [First Date].[Date].CurrentMember }
);

( [Period].[Period].[60 calendar days],
[Second Date].[Date].[Date].Members,
[First Date].[Date].DefaultMember
) =

Aggregate(
{ [Period].[Period].DefaultMember } *
{ [Second Date].[Date].CurrentMember.lag(59) : [Second Date].[Date].CurrentMember }
);


End Scope;

If I get a chance to try it, I'll post back with the results.


FYI - I put a similar test case together in Adventure Works, relating [Date] and [Ship Date] to the same fact date field. Using the [Scenario] dimension &[2] (Budget) member to hold trailing 60-day calculations, only calculations for the 2nd dimension (Ship Date) work. But if a new calculated member like [Scenario].[Scenario].[Trailing60] is used instead, calculations on both [Date] and [Ship Date] seem to work fine.

Since you might not wish to go down the calculated member path, another approach which seems to work is "freezing" the first calculation, to prevent its results being changed by the next calculation. In your scenario, something like:

Scope(
{
[Measures].[Amount],
[Measures].[Count]
},

[Period].[Period].[60 calendar days]
);

Scope( [First Date].[Date].[Date].Members );

this =

Aggregate(
{ [Period].[Period].DefaultMember } *
{ [First Date].[Date].CurrentMember.lag(59) : [First Date].[Date].CurrentMember }
);

freeze(this);

End Scope;

( [Second Date].[Date].[Date].Members ) =

Aggregate(
{ [Period].[Period].DefaultMember } *
{ [Second Date].[Date].CurrentMember.lag(59) : [Second Date].[Date].CurrentMember }
);


End Scope;

Tuesday, March 20, 2012

Composite vs Single Primary Key

Our DBA has chosen to use an autonumber column as the primary key for a linking table consisting of 2 other IDs, I'll call them x and y. Another developer is concerned that since our application only keeps track of IDs x and y, not the autonumber ID, that searches will be less efficient because if x and y were a composite key they would be indexed. Is this correct? If so, couldn't you just create an index for these columns to make it equally efficient?

I believe that to eliminate all composite keys is a requirement for second normal form (correct me if I'm wrong), but does assigning an autonumber primary key to a linking table made up of solely 2 IDs have any benefits?

First of all, it's entirely possible to have the primary key (in the relational database sense, the one you can link tables with) different from the CLUSTERING key (the key for the main index of the table), e.g.:

Create Table T(p int primary key nonclustered, x int, y int);

Create Clustered Index T_i on T(x);

Also, you can always create secondary nonclustered indexes to improve efficiency, please look up indexing strategies articles (e.g. Books On-Line article http://msdn2.microsoft.com/en-us/library/ms189271.aspx), although I think for a linking table a single clustered index should be fine.

Now, for your particular issue I can't think of any benefit to an autonumber primary key except that if a lot of INSERT operations occur, and if the two other IDs are not naturally increasing (on average), then the autonumber clustering key would keep the index from being fragmented and INSERTs would be quicker since you always insert at the end. Having said that, my gut feeling is you shouldn't do it and you should keep the composite key. The composite key (if it's a primary key, not just a clustering key) would also enforce that no duplicate combinations could be inserted into the linking table.


2NF just states that all data in a table must depend on the entire key.

That doesn't mean the key can't be composite.

If this table resolves a many-to-many relationship (which sounds to be the case), then it is correctly designed for 2NF (even 3NF).
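One point from the first reply is easy to demonstrate: a composite primary key on the linking table rejects duplicate (x, y) pairs for free, which a surrogate autonumber key alone would not. A minimal sketch in Python's sqlite3 (the `link` table and its columns are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Linking table where the pair (x, y) itself is the primary key.
cur.execute("CREATE TABLE link (x INT, y INT, PRIMARY KEY (x, y))")

cur.execute("INSERT INTO link VALUES (1, 2)")
try:
    cur.execute("INSERT INTO link VALUES (1, 2)")  # duplicate pair
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the composite key enforced uniqueness
print("duplicate rejected:", rejected)
```

With an autonumber primary key instead, the same duplicate insert would succeed unless a separate unique index on (x, y) were added.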

Friday, February 24, 2012

Completely Script SQL 2005 Database

Hi everyone, what I'm looking for is a way or a freeware utility that will generate a single SQL script for my entire db...

Just like phpMyAdmin does for MySQL...

It would need to script my tables, the data in those tables, and all my stored procs.

Can SQL Server Management Studio do this? Or does anyone know of a utility that will?

Thanks, Justin

Have you tried the Generate Scripts Wizard in SSMS?

You can get to it off any database node in Object Explorer, under the Tasks flyout menu. Look for "Generate Scripts..."

I don't think it can script the data though. For that you might take a look at this addin to SSMS:

http://www.codeproject.com/useritems/enisey.asp

If you know a little C# and want to try writing your own, SMO exposes pretty much every object in SQL Server, and each one has a Script() method which returns a StringCollection.

So it would be a pretty simple matter to write your own app to do this using SMO. I have done it, actually. When I get a chance I'll post it on my site - link below. I'll try to get it posted this weekend.

Just saw this post, looks like it may be of use to you:

http://www.dbforums.com/showthread.php?t=1610233

Here's the app I was talking about above. It doesn't script all the data, just the objects. It generates a separate file for each object. I did it that way so it's easy to put a db under source control if it's not already.

http://www.elsasoft.org/tools.htm

Edit: I couldn't help myself. So I just added some code so that scriptdb.exe will script out the data as well (optionally) using bcp.exe. :)

Hi,

Thank you for that script, it worked a charm,

However, I was looking for something that will script it to a .sql file... as in, INSERT INTO type stuff...

Cheers, Justin
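For comparison, some engines build this in: SQLite's Python driver can emit an entire database, schema plus INSERT statements, as a single .sql script via iterdump. This is not a SQL Server solution, just an illustration of the kind of single-script output being asked for (table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t (name) VALUES ('Justin')")
conn.commit()

# iterdump yields the full database as SQL text: CREATE TABLE
# statements followed by INSERT INTO statements for the data.
script = "\n".join(conn.iterdump())
print(script)
```

Writing `script` to a .sql file gives exactly the "INSERT INTO type stuff" requested above, for SQLite at least.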