
Notes from: Erland Sommarskog on ‘Error and Transaction Handling in SQL Server’

XACT_ABORT

SQL Server cannot catch every possible error (e.g. a closed connection), but it can catch most. So use XACT_ABORT and TRY...CATCH together. If the stored procedure does not have a TRY...CATCH, then at least use XACT_ABORT.

SET XACT_ABORT, NOCOUNT ON

Use the above in stored procedures (15:35 & 24:00 in the talk) because it will (see the sketch after this list):

  • Ensure that errors which cannot be caught by the TRY...CATCH block still abort and roll back the transaction.
  • Guarantee that if an error occurs in a nested transaction, the entire transaction is aborted.
  • Roll back the transaction when a “Timeout Expired” error occurs.
  • In SQL Server (unlike MySQL, Oracle, and PostgreSQL), if an inner stored procedure catches an error, the entire transaction, including the outer stored procedure's work, is rolled back.
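
A minimal sketch of the pattern (dbo.Orders and its columns are made up for illustration); a fuller template with logging appears under Example below:

SET XACT_ABORT, NOCOUNT ON

BEGIN TRANSACTION
BEGIN TRY
	UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 42
	COMMIT TRANSACTION
END TRY
BEGIN CATCH
	IF @@TRANCOUNT > 0
		ROLLBACK TRANSACTION
	;THROW -- re-raise the original error
END CATCH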

Savepoints

There is a concept of transaction SAVEPOINTS, but rolling back to a savepoint is only possible for some errors and not all (even if you do not use XACT_ABORT).
Savepoints are also not supported in distributed transactions, for example when a stored procedure is called over a linked server.
e.g.
BEGIN TRANSACTION MyTran
-- first part of the work
SAVE TRANSACTION Throw
-- second part of the work
ROLLBACK TRANSACTION Throw -- rolls back to the savepoint only

Note that Throw above is used as the name of the savepoint, not as the error-handling keyword THROW. This ambiguity is why it is mandatory to use a semicolon with the error-handling keyword: ;THROW.
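
A minimal sketch of the ambiguity:

-- Without a semicolon, the two lines below parse as ONE statement,
-- ROLLBACK TRANSACTION THROW, i.e. roll back to a savepoint named Throw:
ROLLBACK TRANSACTION
THROW

-- With the semicolon, THROW is the error-handling statement that re-raises:
ROLLBACK TRANSACTION
;THROW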

Re-raise the error

  • Use THROW (the alternative to RAISERROR) in the CATCH block, starting it with a semicolon: ";THROW".
  • Use RAISERROR only if SQL Server 2005/2008 is used, or if you want to log the error with other custom behaviours (see the sketch after this list).
  • THROW is simple to use, the original system error is preserved, and it ensures execution is aborted.
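
For comparison, a minimal sketch of the older RAISERROR pattern (RAISERROR re-raises under a new error number, 50000, unless you capture and log the original details yourself):

BEGIN CATCH
	DECLARE @ErrMsg NVARCHAR(2048) = ERROR_MESSAGE()
	IF @@TRANCOUNT > 0
		ROLLBACK TRANSACTION
	RAISERROR('%s', 16, 1, @ErrMsg) -- severity 16, state 1
END CATCH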

Example

/*
	Author		:	Usama Ahmed
	Created On	:	20/09/2023 
	Description	:	
	History		:	
*/


CREATE OR ALTER PROCEDURE [dbo].[usp_ActionObject_Template]
	@ExistingSessionID [UNIQUEIDENTIFIER] = NULL -- optional if called from another SP
AS
SET XACT_ABORT, NOCOUNT ON
BEGIN	
	BEGIN TRANSACTION
	BEGIN TRY	
		----- Start Logging ------------------------------------------------------------------------
		DECLARE @LocalSessionID [UNIQUEIDENTIFIER] = ISNULL(@ExistingSessionID, NEWID())
			, @LocalProcessName [NVARCHAR](128) = OBJECT_NAME(@@PROCID)
		EXEC AuditLog.usp_LogProcessExecutionStart @SessionID = @LocalSessionID, 
													@ProcessName = @LocalProcessName    
		------------------------------------------------------------------------------------------------

		-- Main code for the SP
		
		----- Complete Logging ------------------------------------------------------------------------ 
		DECLARE @LocalTargetRowsUpdated INT = @@ROWCOUNT 

		DECLARE @LocalSourceRows INT = (SELECT COUNT(1) FROM [dbo].KPIMapping_Lookup)
		EXEC AuditLog.usp_LogProcessExecutionEnd @SessionID = @LocalSessionID,    
										 @ProcessName = @LocalProcessName, 
                                         @SourceRows = @LocalSourceRows, 
                                         @TargetRowsUpdated = @LocalTargetRowsUpdated 
		------------------------------------------------------------------------------------------------- 
		
		IF @@TRANCOUNT > 0  
			COMMIT TRANSACTION
	END TRY  
	BEGIN CATCH  
		DECLARE @LocalErrorCode INT = ERROR_NUMBER()
			, @LocalErrorProcessName NVARCHAR(128) = ERROR_PROCEDURE()
			, @LocalErrorLineNumber INT = ERROR_LINE() 
			, @LocalErrorDescription NVARCHAR(4000) = ERROR_MESSAGE() 
		
		SET @LocalErrorDescription = 'Error Code: ' + LTRIM(STR(@LocalErrorCode)) + ', Error Message: ' + @LocalErrorDescription

		IF @@TRANCOUNT > 0  
			ROLLBACK TRANSACTION  

		----- Error Logging ------------------------------------------------------------------------ 
		EXEC AuditLog.usp_LogProcessExecutionError 
			@SessionID = @LocalSessionID
			, @ErrorProcessName = @LocalErrorProcessName
			, @ErrorLineNumber = @LocalErrorLineNumber 
			, @ErrorDescription = @LocalErrorDescription
		-------------------------------------------------------------------------------------------------

		; THROW 
	END CATCH
END 
GO

Power BI Editor Shortcuts

Power BI – DAX editor: edit all occurrences of the selected text at once.
Highlight the text then press >> Ctrl + Shift + L

Power BI / Windows emoji picker. Same functionality, two different shortcuts:
Win + .

Win + ;

Here is the code in SQL format:

declare @t table ( f1 int identity , f2 varchar(100))

insert into @t
select ' hello'
union select 'world'

select * from @t

-- Table to XML
select (
    select 1 as f1r1, 2 as f1r2
    for XML path ('f1'), type
), (
    select 'hello' as f2r1, 'world' as f2r2
    for XML path ('f2'), type
)
for XML path (''), root('sample')
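
A sketch tying the two snippets together: the same FOR XML approach converts the @t table variable itself to XML (run it in the same batch as the declaration above):

select f1, f2
from @t
for xml path('row'), root('sample')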

-- XML to table
declare @myXML xml
set @myXML = '
<sample>
    <f1>
        <f1r1>1</f1r1>
        <f1r2>2</f1r2>
    </f1>
    <f2>
        <f2r1>hello</f2r1>
        <f2r2>world</f2r2>
    </f2>
</sample>
'

select a.b.value('f1[1]/f1r1[1]','int') as f1,
    a.b.value('f1[1]/f1r2[1]','int') as f2,
    a.b.value('f2[1]/f2r1[1]','varchar(10)') as f3,
    a.b.value('f2[1]/f2r2[1]','varchar(10)') as f4
from @myXML.nodes('sample')a(b)

-- read this as @myXML.nodes('[root]') [table name]([column name])
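
One more sketch (with a made-up XML value): when the XML contains repeated elements, nodes() returns one row per matching node, which is exactly what makes it behave like a table:

declare @x xml = '<root><item>1</item><item>2</item></root>'

select i.n.value('.', 'int') as item_value
from @x.nodes('/root/item') as i(n)

-- returns two rows: 1 and 2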


This is what the XML row looks like when clicked on:

To architect Power BI properly and produce a blueprint for a company, a roadmap of some kind needs to be used. Just installing Power BI and letting it go wild is simply not right: that way lies the wild west, and things will soon start to grow their own arms and legs.

Implementing Power BI is far more than just a technology project, because there are many components to it: where the data lives, how the data will be trusted, how we certify and access the data, at what level we restrict the user base, who has access to publishing, whether we use workspaces or apps. There are a lot of interconnected things.

I have followed Microsoft's recommended Adoption Roadmap to architect the Power BI implementation. This included closely liaising with the directors, other architects, analysts, users and developers.

For example, the first step in the roadmap was about data culture, which is huge. I started off with how things were being done within the company and how we could gradually move to the next level. Then I discussed the culture of being effective with the Power BI technology, such as not having a large number of duplicate datasets or significant downloads and exports, so that the appetite to become more data-driven could be achieved.

The next steps in the roadmap were to look into the approaches that could be used and to target the user base. I also looked into governance, focusing on user empowerment within the requisite guardrails.

I've delivered documentation and recommendations on the things I just explained, and also on content ownership and delivery, the COE, endorsement, architecture, and more. It was a huge success and appreciated throughout the company.

Quotes

Our governance principle says that we want to empower users within the requisite guardrails. – Microsoft

Discipline at the core and flexibility at the edge – Microsoft

“If using your data isn’t your primary goal, then what the heck are you doing?” – Laura Madsen

Executive Sponsor – Essentially, you need “clout” if you’re going to challenge the status quo. – Melissa Coates

Data should be transformed as far upstream as possible, and as far downstream as necessary.

In this context “upstream” means closer to where the data is originally produced, and “downstream” means closer to where the data is consumed. – Matthew Roche

The advice is to create these flows with a service account so that there is no dependency on a single user.

You would probably want to create a schedule that refreshes the dataflow, and whenever that finishes it becomes the trigger for the Power BI report to be refreshed. To do that we will create a flow in Power Automate (previously known as Microsoft Flow). Creating flows uses the same designer as Azure Logic Apps, so if you know one you would know the other.

Start

Once you are logged into Power Apps, go to Flows.

The first task is to create a flow that schedules the dataflow refresh.

Start by clicking on the New Flow button and then follow the screenshots below:

Typing the name will give you suggestions from which you can pick an operation.

Select your flow and hit Save, and the first flow will be created.



 

Now onto creating the second flow to refresh the report:

These are the two flows used. Fill in the blanks and hit Save.

Next, if you want, you can turn the schedule off.

You may also want to share the flow with other users, since it might have been created using a service account.

Data Flow – Mark as key

A few months back I posted that I could not find how to mark a column as a primary key in Power Query. Please check this out:

Power Query – Mark column as key

I have been working with Power Apps dataflows lately and found a command in Dataflows called "Mark as key", which as of 14/01/2022 still does not exist in Power BI's Power Query, although under the hood it uses the same command I've described in the link above:

Snippet from the M code generated after clicking on the "Mark as key" button:

Check out the difference in the ribbons:

Power BI – Power Query (Mark as key missing):

PowerApps Data Flow – Power Query:

Microsoft recommendation

It is now a recommendation to use Power Apps dataflows rather than Power BI dataflows.

I think the idea is that when you go to use a dataflow as a source from Power BI you will still see the below, but over time the Power BI dataflows will be deprecated:

Another piece of advice is to create dataflows under a service account rather than a personal account, so that they are not dependent on a single user.

Here is how to locate it:

powerapps.com >> sign in >> Data >> Data Flow

You will also note that using "Mark as key" puts a key symbol on the column:

This key symbol disappears when you view the same column from Power BI – Power Query:

This may be because there is no actual use for it in Power BI: we are able to model in Power BI, implicitly specifying which keys are used to create the relationships, and that is all Power BI needs to know for now.

Having said all this, we all know the Power Platform is rapidly evolving and changing, so what applies today might be different tomorrow. Let's see what tomorrow brings.

As of 18/08/2021 I have found no sites (except MSFT's own site), articles or videos that show how to mark a column as a key (primary key or foreign key).

https://docs.microsoft.com/en-us/powerquery-m/table-addkey

Using help from the above I added the below step in M Code and it worked

MarkAsKey = Table.AddKey(#"PreviousStep", {"ColumnName"}, true)

Please also refer to a new post relating to this:

https://bit.ly/34PUMlE

We know that we need to replace NULL with nothing if it's text, or with a number when it's a number type.

Now, we can easily replace the NULLs when the column is a number, but when it is text you sometimes have both NULLs and blanks.

So if we have an ID column detected as text that contains both NULLs and blanks (see the sketch after this list):

  1. Change the data type to number. This will turn all blanks into NULLs.
  2. Replace the NULLs with whatever you want (I use -1 for N/A and -2 for Unknown).
  3. Convert it to whichever data type you want.
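
The same idea sketched in T-SQL rather than Power Query (table and column names are hypothetical): NULLIF turns blanks into NULLs, TRY_CAST changes the type, and COALESCE then replaces both cases in one go:

select coalesce(try_cast(nullif(ltrim(ID), '') as int), -1) as CleanID -- -1 = N/A
from dbo.SomeSourceTable -- hypothetical table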

SSAS DAX – Practice

The cube used

===============

I have an instance of Analysis Services in Tabular mode.

I have used SQL Server 2017 Dev and also downloaded “AdventureWorksDW2016CTP3.bak” from:

https://www.microsoft.com/en-us/download/details.aspx?id=49502

From the link below:

https://github.com/Microsoft/sql-server-samples/releases/tag/adventureworks-analysis-services

In the section:

AdventureWorks for Analysis Services

Downloaded the file:

adventure-works-tabular-model-1200-full-database-backup.zip

Then restored the below into Analysis Services:

Adventure Works Internet Sales Database.abf

I have also downloaded the project file from:

Adventure-works-tabular-model-project-1200.zip

This would help to check the relationships:

Updated Data Source:

I have gone to the properties of “AdventureWorksDW2014” and updated the “Connection String” to point at “AdventureWorksDW2016CTP3” database.

Once I had done all the above, I opened a "DAX Query" window:

DAX Query

============

--# Query 1 - Select entire table

evaluate ( 'Internet Sales' )

-- evaluate - we can think of EVALUATE as SELECT in SQL

--# Query 2 - Sort data

evaluate ( 'Internet Sales' )
order by 'Internet Sales'[Sales Order Number]

--# Query 3 - Select entire table with starting point

evaluate ( 'Internet Sales' )
order by 'Internet Sales'[Sales Order Number]
start at "SO75120"

--# Query 4 - Select a field - use of VALUES

evaluate ( values ( 'Internet Sales'[Sales Order Number] ) )
order by 'Internet Sales'[Sales Order Number]
start at "SO75120"

-- If we hover the cursor over the VALUES keyword, it displays the description as below:

--# Query 5 - Count Rows

evaluate ( ROW ( "CountMyRows", COUNTROWS ( 'Internet Sales' ) ) )

-- ROW - returns a table with a single row

-- "CountMyRows" is the name of the field we want to give

-- COUNTROWS is the function that will count the rows

-- 'Internet Sales' is the table

--# Query 6 - Filters

evaluate ( filter ( 'Internet Sales', [Sales Order Number] = "SO75120" ) )

-- as the name suggests, this acts like a WHERE clause in SQL

-- SYNTAX -- FILTER ( Table/Dataset, Column = Condition )

--# Query 7 - Nested Filters

evaluate (
filter (
filter ( 'Internet Sales', [Sales Order Number] = "SO75120" )
, [Product Id] = 485
)
)

-- Note - since [Product Id] is an integer we did not need the inverted commas

--# Query 8 - Operators

evaluate (
filter ( 'Internet Sales', [Sales Order Number] = "SO75120" && [Product Id] = 485 )
)

--# Query 9 - Grouping

evaluate (
SUMMARIZE ( 'Internet Sales', [Sales Order Number], "Max Product ID", MAX ( 'Internet Sales'[Product Id] ) )
)

-- Can be read as - find the max Product Id grouped by Sales Order Number

--# Query 10 - Grouping multiple fields

evaluate (
SUMMARIZE
(
'Internet Sales', [Sales Order Number], [Customer Id], -- multiple group-bys
"Max Product ID", MAX ( 'Internet Sales'[Product Id] ), -- field 1
"Max Promotion ID", MAX ( 'Internet Sales'[Promotion Id] ) -- field 2
)
)

--# Query 11 - Joining Fields from Tables

evaluate (
ADDCOLUMNS ( 'Internet Sales', "Customer Name", RELATED ( 'Customer'[First Name] ) )
)

-- Bring the customer name along with all the fields in 'Internet Sales'

--# Query 12 - Joining can also be done using SUMMARIZE

-- Other join functions: UNION, INTERSECT, EXCEPT, NATURALINNERJOIN, NATURALLEFTOUTERJOIN

evaluate (
SUMMARIZE ( 'Internet Sales', [Product Id], [Promotion Id], 'Customer'[First Name] )
)

--# Query 13 - Navigating Parent-Child Data - SYNTAX

evaluate (
filter ( 'TableName', PATHCONTAINS ( [Path], "3" ) )
)
order by 'TableName'[ParentKey], 'TableName'[ChildKey]

MDX – Practice

The cube used

===============

I have an instance of Analysis Services in Multidimensional mode.

I have used SQL Server 2017 Dev and also downloaded “AdventureWorksDW2016CTP3.bak” from:

https://www.microsoft.com/en-us/download/details.aspx?id=49502

From the link below:

https://github.com/Microsoft/sql-server-samples/releases/tag/adventureworks-analysis-services

In the section:

AdventureWorks for Analysis Services

Downloaded the file:

adventure-works-multidimensional-model-full-database-backup.zip

Then restored the below into Analysis Services:

AdventureWorksDW2014Multidimensional-EE.abf

Updated Data Source:

I have gone to the properties of “AdventureWorksDW2014” and updated the “Connection String” to point at “AdventureWorksDW2016CTP3” database.

I have also renamed the database to “ADW” from “AdventureWorksDW2014Multidimensional-EE”

Once I had done all the above, I opened an "MDX Query" window against ADW for the MDX scripts.

MDX Query

============

-- if we do not mention any measure, then by default it uses the DEFAULT MEMBER / MEASURE of the cube:

select
from [Adventure Works]

-- we get a value without any headers

-- To know what the default measure is, we need to put it on the COLUMNS axis:

-- An axis is one particular dimension of the cube

select [Measures].defaultmember
on columns
from [Adventure Works]

-- Now we get a header called "Reseller Sales Amount", which is inside the Measure Group called "Reseller Sales"; this is the default measure of the cube.

-- What are the base measures in a Measure Group:

select MeasureGroupMeasures("Internet Sales") on columns
from [Adventure Works]

-- This will show all 7 measures (NOT the calculated ones) in the Measure Group called "Internet Sales"

-- Of course you can always browse and see all this from the object pane in SQL Server.

-- Explicit default measure

select
from [Adventure Works]
where [Measures].[Reseller Sales Amount]

-- the WHERE clause here is not used to filter (as in T-SQL) but rather to explicitly specify which measure to display

-- We will not see the header this time because the measure is not on the COLUMNS axis but specified in the WHERE clause.

-- The below is the total of all [Internet Sales Amount] data in the cube and does not display the header

select
from [Adventure Works]
where [Measures].[Internet Sales Amount]

-- When we specify an axis/dimension, we are able to slice by that dimension

-- This, on the other hand, does display the header "Internet Sales Amount", since it is on the COLUMNS axis

select [Measures].[Internet Sales Amount]
on columns
from [Adventure Works]

-- If we use a user hierarchy (dimension attributes) on the COLUMNS axis
-- then we see the default measure value for each of the members, along with its header

select [Product].[Product Categories].[Category]
on columns
from [Adventure Works]

-- selecting both the parent and child nodes

-- This attribute is a hierarchy

-- The property "AttributeAllMemberName" is "All Products", hence you see the breakdown as well as the value for "All Products"