So You Want to Deploy Power BI Project files (PBIPs)?

Have you heard the news about the new Power BI Project files? Okay, maybe not news anymore since it was announced over a year ago. Just in case you hadn't heard, Microsoft is using a new “payload” format that is human readable (it's JSON) instead of a binary format like the original .pbix. This is great news for source control: you can now easily see the differences between versions, so you know exactly what changed.

This new “payload” format essentially “unzips” the contents of the .pbix and stores them as plain files and folders. The payload consists of a .pbip file and one or more folders containing all the parts and pieces you need for your report and/or semantic model.
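For example, saving a report called Sales Analysis as a project produces a layout roughly like this (the exact contents vary by Power BI Desktop version, and the names here are just an illustration):

Sales Analysis.pbip
Sales Analysis.Report – folder containing the report definition files
Sales Analysis.SemanticModel – folder containing the semantic model definition files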

When it was announced, there was a collective cheer from Power BI source control advocates heard 'round the world. Since its preview release, Microsoft has also added Git integration with Fabric workspaces. This makes it easy to incorporate source control for all (or almost all) of your Fabric artifacts, including Power BI.

But what happens when your organization already has a mature CI/CD process in place using Azure DevOps? Do you really want to break from that pattern and have it controlled somewhere else? That's what this post is about: using Azure DevOps CI/CD pipelines to deploy your Power BI Project files (.pbip).

I’m going to share my experience in hopes that it will save you some time if this is the route you need to take.

Prerequisites

  • Power BI premium capacity workspace or Fabric workspace – For Power BI workspaces, this can be a PPU workspace or a dedicated capacity SKU; for Fabric workspaces, this can be any workspace backed by any F SKU
  • Azure DevOps Repo – Repository for your source code and pipelines
  • Service Principal – Used by the Azure DevOps pipeline to authenticate to the Power BI service; this account will also need at least Contributor permission on the workspaces you are deploying to
  • Fabric PowerShell cmdlets – Rui Romano at Microsoft has created these and made them publicly available via GitHub – they serve as a wrapper for the Fabric APIs
  • PowerShell 7.0 or higher – The Fabric PowerShell cmdlets require PowerShell 7.0 or higher
  • Power BI Desktop March 2024 or later – You will need this to create the Power BI project files

Decisions To Make

There are some decisions that need to be made, and carefully thought out, before you get started.

  • Will your organization be separating semantic models from reports, which is a best practice for encouraging semantic model reuse? This becomes important when thinking about how to structure your repo.
    • I chose to separate my semantic models from reports, to encourage semantic model reuse.
  • How will your organization structure your repo? Are you creating a separate repo for Power BI artifacts? What will the folder structure look like for Power BI items in your repo? This becomes important for scalability.
    • I chose to use a folder structure that has the deploy type (semantic model or report) at the top, followed by the name of the workspace. The path for semantic models would look something like <repo root>\Datasets\<semantic model workspace name>\<your pbip file/payload>. (I purposely chose the word “Datasets” instead of “Semantic Models” because the path is limited to 256 characters, so I save characters where I can.) For reports, it would look something like <repo root>\Reports\<report workspace name>\<your pbip file/payload>.
  • Does your organization have the PowerShell skills? I’m going to assume yes, since your organization already has a mature CI/CD process in place using Azure DevOps. This will be important when it comes to building payloads for deploy.
    • Most of the PowerShell you will need is around the IO file system, but you will also need to be familiar with looping and conditional statements.

Creating the Pipelines

In Azure DevOps, you have pipeline pipelines (no, that is not a typo) and release pipelines. This has always confused me; they are both pipelines, but “pipeline pipelines” just sounds weird to me. My OCD brain needs something to distinguish them, so I call pipelines “build pipelines”. For release pipelines, well, my brain accepts “release pipelines”, so all good there. But I digress.

Build Pipeline

I used the build pipeline to build my payload of files needed for deploy based on the files that have changed since the last commit. Now you may be asking, why do you need to build a payload? We know what files changed, so what more do we need? Well, that's where the Fabric PowerShell cmdlets come in. You can either deploy a single item or multiple items. The catch is that the parameter for the item(s) to deploy is a folder, not a single file.

I did a bit of poking around in the cmdlets' code and discovered they deploy the .SemanticModel and/or .Report folder(s) when they call the Fabric API. These folders are part of the “unzipped” payload of the Power BI Project and contain all the parts and pieces needed for your semantic model and/or report, so you have to deploy all of those files/folders. But if you made a change that only affected one file in one of those folders, the rest won't show up when you look only at the files that changed since the last commit. This is why you have to build a payload of files based on the file(s) that changed, and it's where the PowerShell file system cmdlets come in, along with looping and conditional statements. Once you have that payload of files, you need to put them in a place where your release pipeline can pick them up and proceed with the actual deploy.
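For illustration, here is a rough sketch of that payload-building step (not my exact script). It assumes the step runs from the repo root on the build agent, that Git is available, and that copying everything into the artifact staging directory is good enough for your release pipeline to pick up.

# Stage the full .SemanticModel / .Report folder for every file changed in the last
# commit, preserving the Datasets\<workspace> / Reports\<workspace> structure
$stagingDir = $env:BUILD_ARTIFACTSTAGINGDIRECTORY   # Azure DevOps predefined variable

# Files changed since the previous commit
$changedFiles = git diff --name-only HEAD~1 HEAD

foreach ($file in $changedFiles) {
    # Walk up the path until we reach the enclosing .SemanticModel or .Report folder
    $dir = Split-Path $file -Parent
    while ($dir -and ($dir -notmatch '\.(SemanticModel|Report)$')) {
        $dir = Split-Path $dir -Parent
    }
    if (-not $dir) { continue }   # changed file is not part of a deployable folder (e.g. the .pbip itself)

    # Copy the whole folder once, even if several files inside it changed
    $destination = Join-Path $stagingDir $dir
    if (-not (Test-Path $destination)) {
        New-Item -ItemType Directory -Path (Split-Path $destination -Parent) -Force | Out-Null
        Copy-Item -Path $dir -Destination $destination -Recurse -Force
    }
}

The point is that one changed file pulls its entire .SemanticModel or .Report folder into the payload, which is exactly what the cmdlets need.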

Release Pipeline

I used the release pipeline to do the actual deploy of the files in the payload created by the build pipeline. This is where the Fabric PowerShell cmdlets come into play. I used PowerShell again to inspect the payload to determine what parameters to pass to the cmdlets, then did the deploy. Because I thought carefully about how to structure my repo, I was able to easily deploy on a per-workspace basis with a little bit of PowerShell looping. This ensures a very scalable solution. It doesn't matter if I make changes to semantic models/reports in more than one workspace; if the changes are in the same commit, they all go, regardless of workspace.
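Here is a rough sketch of what that release step can look like. The cmdlet and parameter names (Set-FabricAuthToken, Import-FabricItems) are my reading of Rui Romano's FabricPS-PBIP module, so verify them against the version you download; the workspace name-to-ID mapping and the artifact path are placeholders for however you resolve those in your pipeline.

# Deploy everything in the payload, one workspace folder at a time
Import-Module ".\FabricPS-PBIP.psm1" -Force   # path to wherever the module lives on the agent

$payloadRoot  = "$env:SYSTEM_ARTIFACTSDIRECTORY/pbip-payload"   # placeholder artifact path
$workspaceMap = @{ "Sales Models" = "<workspace guid>" }        # placeholder name-to-ID lookup

# Authenticate as the service principal (secrets supplied as pipeline variables)
Set-FabricAuthToken -servicePrincipalId $env:SPN_APP_ID `
                    -servicePrincipalSecret $env:SPN_SECRET `
                    -tenantId $env:TENANT_ID

# Semantic models first, then reports, so report connections have something to point at
foreach ($deployType in @("Datasets", "Reports")) {
    $typeFolder = Join-Path $payloadRoot $deployType
    if (-not (Test-Path $typeFolder)) { continue }   # nothing of this type changed

    foreach ($workspaceFolder in Get-ChildItem -Path $typeFolder -Directory) {
        Import-FabricItems -workspaceId $workspaceMap[$workspaceFolder.Name] `
                           -path $workspaceFolder.FullName
    }
}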

Assumptions

I did make some assumptions when I created these pipelines:

  • This process will only be used for Development build/release
    • Why am I mentioning this? Because there's this pesky thing called connections. In the paradigm I am using, where we separate the semantic models from the reports (to encourage semantic model reuse), I am assuming the connection to the semantic model in the report will not change in a development deploy. This means that whatever the connection is in the report will be the connection when it goes to the Power BI service.
  • Semantic models will already exist in the Power BI service that are used by reports
    • When you separate the semantic model from the report, the semantic model must already exist in the Power BI service before you can create the report, because that is how the connection in the report gets created. This means you will need to check in/sync your local branch with the remote branch where your semantic model creation/changes live before you can create any reports that use those semantic models.
  • When deploying to any environment other than development, you will either have to use a different release pipeline that modifies the connection or modify your release pipeline to modify connections
    • There are options for editing the connection of a report/dataset. You can use the Fabric PowerShell cmdlets to do this. The catch is that you need a really good naming convention in place to make this happen dynamically. (This is still on my to-do list, so I'm sure there will be another blog post coming once I get it done. In the meantime, there's a rough sketch of one option right after this list.)
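For the curious, here is a hedged sketch of one way to repoint a report at a different semantic model, using the Power BI REST API's Rebind endpoint rather than the Fabric cmdlets. It assumes the naming convention does the heavy lifting (the dev and target models share the same dataset name), that you are already authenticated with Connect-PowerBIServiceAccount, and that the IDs are placeholders for your own values.

# Repoint a report at the semantic model of the same name in the target workspace
$reportWorkspaceId = "<report workspace guid>"
$modelWorkspaceId  = "<target semantic model workspace guid>"
$reportId          = "<report guid>"
$datasetName       = "Sales"   # placeholder model name shared across environments

# Resolve the target semantic model by name
$targetDataset = Get-PowerBIDataset -WorkspaceId $modelWorkspaceId -Name $datasetName

# Rebind the report to that model
Invoke-PowerBIRestMethod -Method Post `
    -Url "groups/$reportWorkspaceId/reports/$reportId/Rebind" `
    -Body (@{ datasetId = $targetDataset.Id.ToString() } | ConvertTo-Json)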

I hope you found this post useful. These are things that I wish I had known before I started, so I thought they might be useful to others. I’m working on anonymizing my code so I can make it available via GitHub. Stay tuned for details.

How to Create a Time Table in Power BI Using DAX

Lately, I've been having to create items in Power BI that I haven't had to create in a while; this week it was a Time table. Of course, I don't do this often enough, so I had to search the dark corners of my mind, eventually gave up, and used Google. I am documenting this here for my poor memory, but I figured it could probably help others as well, as I had to use a couple of articles to jog my memory. Now it's all in one place for future reference. You're welcome 😁

There are some instances when you want to analyze data over time, not just dates. Most of us are familiar with having to create date tables and use them in analysis, but having to analyze data over time is not as common. Let’s say you run a taxi company and you want to determine when your busiest times of day are. This would come in handy for scheduling drivers. You need more drivers during busy times because no one wants to wait for a taxi!

My example creates a time table down to the minute. You can definitely go more granular, but then you will end up with a time table with 86,400 rows (24 hrs * 60 mins * 60 secs) if you go down to the second. My time table will have 1440 rows (24 hrs * 60 mins).

Some might ask about creating this in Power Query using M. That is a perfectly valid approach, and I encourage it when you can, but you can't always do that based on your data model's data sources and storage modes, which is what I ran into this week. If you want to use Power Query to create your time table, check out this video by Wyn Hopkins on YouTube. He does a great job explaining it.

Now, on to the task at hand: creating a time table using DAX. I do want to say that in order to use this time table for analysis, your data will need to have a column that can join to the time table. Because I am going down to the minute level, your data will have to be at the minute level as well.

Here is the high-level list of steps to create the time table; if this is enough for you, then you can stop reading now. However, if you want the details of each step, keep reading. I will also make all the DAX available via my GitHub repo.

  1. Create Transaction Time table
  2. Add Time to the Minute column
  3. Change Time to the Minute column data type to Time
  4. Add time slot column
  5. Change time slot column data type to Time
  6. Repeat steps 4 & 5 until you have all the time slots you want

Step 1 – Create Transaction Time Table

In your Power BI Desktop file, if you switch to the Data view tab on the left side, you will see the menu change. From this new menu, select New table. You will be prompted to enter some DAX. I am using the GENERATESERIES() function. The DAX for this step is

Transaction Time = GENERATESERIES(0, 1439, 1)

Now change the default column name to Minute of the Day. You’ll see a column of integers from 0 to 1439 for a total of 1,440 rows.

Step 2 – Add Time to the Minute column

Now we need to turn the Minute of the Day column to an actual time, so we need to add a new column, Time to the Minute. The DAX for this uses a formula to convert the column Minute of the Day to a time value. The Time function takes 3 parameters, Hour, Minute, and Second, respectively. For the Hour, we use the FLOOR function to get the hour of the day by dividing the Minute of the Day by 60 (60 minutes in an hour). For the Minute, we use the MOD function to get the remainder of minutes when we divide by 60 (again, 60 minutes in an hour). For the Second, we use 0 since we are not going down to that granularity.

The DAX for this column is

Time to the Minute =
TIMEVALUE(
    TIME(
        FLOOR('Transaction Time'[Minute of the Day] / 60, 1),
        MOD('Transaction Time'[Minute of the Day], 60),
        0
    )
)

Step 3 – Change Time to the Minute column data type to Time

By default, our new Time to the Minute column was added with a datetime data type. Not only that, but the default date is 12/30/1899 – Yikes! We need to convert it to a Time data type. From the Data type dropdown, select Time.

And you can see that the date is no longer part of the column; we only have a time value now.

Step 4 – Add time slot column

Now we need to create the time slot column(s) that we are going to use in our model. I’ll start with my generic DAX pattern of X minutes, then replace X with my value. Add a new column and use the following DAX for the column

X Minute Slot = FLOOR('Transaction Time'[Minute of the Day] / X, 1) * X / 1440

I am going to need a 5-minute time slot, so I will replace all the X values with 5:

5 Minute Slot = FLOOR('Transaction Time'[Minute of the Day] / 5, 1) * 5 / 1440

Step 5 – Change time slot column data type to Time

You’ll notice that we now have a decimal value for our 5 Minute Slot column. That’s not very helpful, so we need to change the data type to Time from the Data type dropdown.

You will get a prompt about Data type change. You will need to click Yes to change your new column to a Time data type.

And now we have a lovely column of time values. Notice how the values repeat for five rows, then change to the next time. This is because we used the FLOOR function.

Step 6 – Repeat steps 4 & 5 until you have all the time slots you want

Now repeat steps 4 & 5 until you have all the time slot columns you want. Here is my time table with the final time slot columns.

I have 5, 10, 15, 30, and 60 minute slot columns.

If you want to go the extra mile, you can create a hierarchy for the time slots; this is what mine looks like.

I know what you're thinking: “How do I use this in a visual?” Well, I used a line chart to track the pickups of my taxis. It looks like this.

You can use the hierarchy to allow for better analysis. My only suggestion is to use the “go to next level” drill down (the two arrows pointing down) instead of “expand to next level” (the forked down arrow) for a better experience.

That's it: you now have a Time table and a hierarchy for better analysis over time.

Options for Data Source Credentials When A Service Principal Owns A Power BI Dataset

In today’s world of wanting to automate everything, specifically, automating your CI/CD process for your Power BI datasets and dataset refreshes, you need to understand your options when it comes to the credentials you can use for your data sources.

If we are using enterprise-wide datasets, we don't want Power BI datasets owned by individuals; we want them to be owned by a Service Principal so they aren't relying on specific individuals when things go sideways (and because we all want to go on vacation at some point). However, it's not always clear what credentials will actually be used for our data sources in our datasets when using a Service Principal. In a previous post, I talked about how to set up a service principal to take over a dataset when using data gateways, but one of the prerequisites I listed was that your data sources needed to be configured with appropriate credentials. That's where this post comes in.

You essentially have three options for data source credentials, depending on your data source type.

  1. Basic Authentication
  2. Active Directory/Azure Active Directory Authentication
  3. Service Principal

This post will help you understand these options and the pros/cons of each.

Basic Authentication

If you are using an on-prem data source like SQL Server or Oracle, basic authentication means you have a username and password that exist only within that data source, and it's up to the database engine to authenticate the user. In SQL Server it's called SQL Authentication, and in Oracle it's called Local Authentication.

Pros

  1. Super easy to set up
  2. All your security is contained within the database itself
  3. Almost all applications can use basic authentication

Cons

  1. Passwords tend to get passed around like a penny in a fountain
  2. On the opposite end of the spectrum from above, the password is sometimes tribal knowledge and not recorded anywhere, so folks are afraid to change it for fear of breaking something
  3. Maintenance can be a nightmare; it's yet another stop on the “disable access” checklist when someone leaves a company

Active Directory/Azure Active Directory

Active Directory (on-prem based) or Azure Active Directory (cloud based) is sometimes referred to as Windows Authentication, because this type of credential is needed to log into a machine – whether it be a laptop, desktop, server, or an environment like a network – and it exists outside of the database.

Pros

  1. Usually a bit more secure, since accounts are usually associated with an actual person, so passwords aren’t passed around
  2. Usually requires interactivity (see next Pro)
  3. A “Service Account” can be created that is not associated with an actual person
  4. Can be added to Active Directory/Azure Active Directory security groups

Cons

  1. Usually requires interactivity
  2. Not supported by all applications, but it is supported in Power BI

Service Principal

This is by far the most secure authentication method. Service Principals are created as “app registrations” in Azure Active Directory, and by nature they are not interactive.

Pros

  1. Most secure out of all methods listed
  2. Require “tokens” to access applications
  3. Allow you to go on vacation

Cons

  1. Can be difficult to set up/configure
  2. In most applications, Power BI included, the tokens have a very small window when they are valid (like, just an hour), which is excellent from a security perspective but bad from an automation perspective

Summary

Which would I use? Well, it depends. What are my client's security requirements? Is Basic Authentication even an option? Some organizations have it disabled for their on-prem systems. If I go with Active Directory/Azure Active Directory, I would most likely use a “service account” (with the password stored in Key Vault), then use a PowerShell script to assign the credentials to the data source. Lastly, there's the Service Principal. My use of this would depend on how and when I am refreshing the dataset. If it's at the end of an ETL/ELT process that can call PowerShell scripts and I know the dataset refresh time is less than an hour, then I would definitely use this authentication method, with an additional call to get a fresh token just prior to issuing the dataset refresh. It can be difficult to choose which authentication method is best for you, but hopefully this post has helped at least a little bit.
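To make that last option concrete, here is a minimal sketch of the “fresh token right before the refresh” idea using the MicrosoftPowerBIMgmt module. The IDs and the variables holding the service principal app ID and secret are placeholders; in real life they would come from Key Vault or your orchestration tool.

# Acquire a fresh token as the service principal, then kick off the dataset refresh
Import-Module MicrosoftPowerBIMgmt

$tenantId      = "<your tenant id>"
$workspaceId   = "<workspace guid>"
$datasetId     = "<dataset guid>"
$spAppId       = "<service principal app id>"        # placeholder
$spSecretValue = "<service principal secret value>"  # placeholder - pull from Key Vault in practice

$secureSecret = ConvertTo-SecureString $spSecretValue -AsPlainText -Force
$credential   = New-Object -TypeName System.Management.Automation.PSCredential `
                           -ArgumentList $spAppId, $secureSecret

# Connecting acquires a fresh token, so the roughly one-hour window starts now
Connect-PowerBIServiceAccount -ServicePrincipal -Credential $credential -Tenant $tenantId

# Start the refresh via the REST API
Invoke-PowerBIRestMethod -Method Post `
    -Url "groups/$workspaceId/datasets/$datasetId/refreshes" `
    -Body '{"notifyOption":"NoNotification"}'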

Steps to Have a Service Principal Take Over a Dataset in Power BI When Using Data Gateways

A little background for those new to using Power BI and Data Gateways. If the data source for your Power BI dataset lives on-prem or behind a private endpoint, you will need a Data Gateway to access the data. If you want to keep your data fresh (either using Direct Query or Import mode), but don’t want to rely on a specific user’s credentials (because we all want to go on vacation at some point), you will need to use a service principal for authentication.

The title of this post is something I have to do on a not so regular basis, so I always have to look it up because I inevitably forget a step. I decided to create a post about it, so I don’t have to look through pages of handwritten notes (yes, I still take handwritten notes!) or use my search engine of choice to jog my memory.

  1. Add Service Principal as a user of the data source(s) in Data Gateway – this can be done in the Power BI service
  2. Add Service Principal as an Administrator of the Data Gateway – this can be done in the Power BI service
  3. Make Service Principal the owner of the dataset – this must be done via PowerShell
  4. Bind the dataset to the Data Gateway data source(s) – this must be done via PowerShell

These are the high-level steps. If this is enough to get you started, you can stop reading now, but if you need more details for any step, keep reading.

Here are some prerequisites that I do not cover in this post. But I do provide some helpful links to get you started if needed.

  1. Power BI Premium workspace (currently Service Principals only work with Power BI Premium or Embedded SKUs)
  2. Have a Service Principal created and added to an Entra ID (f.k.a. Azure Active Directory) security group
  3. Azure Key Vault – because we DON’T want to hard code sensitive values in our PowerShell scripts
  4. Have a Data Gateway installed and configured in your Power BI tenant
  5. The Power BI Tenant Setting, Allow service principals to use Power BI APIs, must be enabled and the security group mentioned above must be specified in the list of specific security groups
  6. The Power BI Tenant Setting, Allow service principals to use read-only admin APIs, must be enabled and the security group mentioned above must be specified in the list of specific security groups
  7. The data source(s) used for the dataset must already be added to the data gateway
  8. The following PowerShell Modules installed: MicrosoftPowerBIMgmt, Az. If you need help getting started with PowerShell, Martin Schoombee has a great post to get you started.

This might seem like a LOT of prerequisites, and it is, but this scenario is typical in large enterprise environments. Now, on to the details for each step.

In my environment I have a service principal called Power-BI-Service-Principal-Demo that has been added to the security group called Power BI Apps. The Power BI Apps security group has been added to the tenant settings specified above.

Step 1 – Add Service Principal as a user of data source(s) in Data Gateway

This step requires no PowerShell! You can do this easily via the Power BI Service. Start by opening the Manage connections and gateways link from the Settings in the Power BI service.

You will be presented with the Data (preview) window. Click on the ellipses for your data source and select Manage Users from the menu.

Search for your security group name (Power BI Apps for me) in the search box, then add it with the User permission on the right side. Click the Share button at the bottom to save your changes.

That’s it for step 1, super easy!

Step 2 – Add Service Principal as Administrator of Data Gateway

This step requires no PowerShell! This wasn’t always true, but it is now! You can do this easily via the Power BI Service. Start by opening the Manage connections and gateways link from the Settings in the Power BI service just like you did in Step 1.

You will be presented with the Data (preview) window. Click on the On-Premises data gateways tab. Click on the ellipses for your gateway and select Manage Users from the menu.

Search for your security group name in the search box, then add it with the Admin permission on the right side. Click the Share button at the bottom to save your changes.

That’s it for Step 2.

Step 3 – Make Service Principal the owner of the dataset

In order for your dataset to be independent of a specific user's credentials, we need to have the Service Principal take over ownership of the dataset. Normally, taking over as owner of a dataset is a simple thing to do in the Power BI service; however, it's not so simple for a Service Principal. The reason is that in order to use the Take over button in the dataset settings, you must be logged in to the Power BI service, and Service Principals cannot log into the Power BI service interactively – that's the whole point. So, we must use PowerShell to make this happen. I have created a PowerShell script to do this, and I do it in combination with Step 4 below.
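For reference, the take-over itself boils down to one REST call made while connected as the service principal (a fuller sketch, including the connection, appears in Step 4 below); the IDs here are placeholders.

# Whoever makes this call becomes the dataset owner, so it must be made while
# authenticated as the service principal, not as yourself
$workspaceId = "<workspace guid>"
$datasetId   = "<dataset guid>"

Invoke-PowerBIRestMethod -Method Post `
    -Url "groups/$workspaceId/datasets/$datasetId/Default.TakeOver"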

Step 4 – Bind the dataset to the Data Gateway data source(s)

There is no interface in the Power BI service that allows users to bind datasets that are owned by Service Principals to Data Gateway data sources. So, you guessed it (or you read the short list of steps above): you have to use PowerShell to do it. I have combined Steps 3 and 4 into a single PowerShell script, which you can download from my GitHub repo. My PowerShell scripts assume that you have secrets in your Key Vault for the following values.

  • Service Principal App ID
  • Service Principal Secret Value
  • Service Principal Object ID
  • Power BI Gateway Cluster ID

If you don't have the secrets, you can always hard code your values in the scripts, though I wouldn't recommend it. These are sensitive values, which is why we store them in Key Vault. If you are unsure about how to get any of these values, this post should help you out for the Service Principal values, and you can get your Power BI Gateway Cluster ID from the Data (preview) screen accessed via the Manage connections and gateways menu option. It's not super obvious, but you can click the little “i” in a circle for your gateway to get your Cluster ID.

In addition to these key vault values, you will also need

  • DatasetID
  • WorkspaceID
  • Name of your Key Vault
  • Your Azure tenant ID
  • Your subscription ID where your Key Vault resides

You will also need the data source ID(s) from the Data Gateway. Lucky for you, I created a script that will get a list of those for you. You're welcome. The GetGatewayDatasources.ps1 script will return a JSON payload; the ID of your data source is in the id node. Be sure to pick the correct entry based on the name node.
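The heart of a script like that is a single GET against the gateways endpoint. Here is a minimal sketch, assuming you are already connected to the Power BI service and using a placeholder gateway ID:

# List the data sources on the gateway so you can grab their IDs
$gatewayId = "<gateway cluster guid>"   # the value you noted from the little "i" icon

Invoke-PowerBIRestMethod -Method Get -Url "gateways/$gatewayId/datasources" |
    ConvertFrom-Json |
    Select-Object -ExpandProperty value |
    Select-Object id, datasourceName, datasourceType, connectionDetails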

You are now ready to use the PowerShell script, TakeOverDatasetAndAssignSPtoGatewayDataSource.ps1, to finish off Steps 3 and 4. Here is a screenshot of the PowerShell code; you can download a copy of the code from my GitHub repo. You need to provide the parameters based on the list above, modify the values you use for your secret names in Key Vault, and provide your Gateway data source ID(s), and you are all set.
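If you would rather see the shape of such a script in text form, here is a heavily simplified sketch of the same flow – pull the secrets from Key Vault, connect as the service principal, take over the dataset, then bind it to the gateway data source(s). The secret names, parameter values, and IDs are placeholders; my actual script on GitHub is the real reference.

param (
    [string]$WorkspaceId,
    [string]$DatasetId,
    [string]$KeyVaultName,
    [string]$TenantId,
    [string[]]$GatewayDatasourceIds
)

# Assumes you have already run Connect-AzAccount against the correct tenant/subscription.
# Pull the sensitive values from Key Vault (secret names are placeholders)
$appId     = Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name "sp-app-id" -AsPlainText
$secret    = Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name "sp-secret" -AsPlainText
$gatewayId = Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name "gateway-cluster-id" -AsPlainText

# Connect to the Power BI service as the service principal
$secureSecret = ConvertTo-SecureString $secret -AsPlainText -Force
$credential   = New-Object -TypeName System.Management.Automation.PSCredential `
                           -ArgumentList $appId, $secureSecret
Connect-PowerBIServiceAccount -ServicePrincipal -Credential $credential -Tenant $TenantId

# Step 3 - take over the dataset (the caller becomes the owner)
Invoke-PowerBIRestMethod -Method Post `
    -Url "groups/$WorkspaceId/datasets/$DatasetId/Default.TakeOver"

# Step 4 - bind the dataset to the gateway data source(s)
$body = @{ gatewayObjectId = $gatewayId; datasourceObjectIds = $GatewayDatasourceIds } |
    ConvertTo-Json
Invoke-PowerBIRestMethod -Method Post `
    -Url "groups/$WorkspaceId/datasets/$DatasetId/Default.BindToGateway" -Body $body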

I couldn’t have done this without the help of these resources. I have essentially combined them in this post to make it easier for me to remember what I need to do.

I hope this was helpful.

Speaking at Live! 360

I am super excited to announce that I have been selected to speak at SQL Server Live! in Orlando in November. I will be presenting two sessions, both of which are completely new.

I will be presenting Power BI Data Wrangling – Choosing the Best Feature for the Job and Getting Started with Governance for Your Power BI Estate. Both are currently scheduled for Tuesday, 14-November-2023, though schedules are always subject to change, so be sure to double check the schedule as the event gets closer.

While this is super exciting, it’s also bittersweet. This will be the first time in over 15 years that I have not attended PASS Data Community Summit, because Live! 360 runs the same week. It was a tough decision for me, but due to my recent physical limitations, I decided to opt for Live! 360 because travel will be significantly easier (and honestly, the weather in Orlando in November is much better than Seattle in November).

Please stop by and say “Hi” if you're in Orlando – I'd love to see you.

No In-Person SQLBits For Me

I am saddened to report that I will not be able to attend SQLBits in person this year as originally planned. I was involved in a terrible car accident on 9-February that has left me (hopefully only temporarily) immobile and unable to travel for 3-6 months. I am on the road to recovery, but it will be a very long road.

This makes me sad for so many reasons. I will miss:

  1. All the hugs from my #SQLFamily
  2. All those smiling faces of delegates
  3. The excitement that comes with travel
  4. The anticipation of presenting live and in-person
  5. Pub Quiz night
  6. Fancy dress party night
  7. Catching up with all my #SQLFamily
  8. Impromptu conversations over hot chocolate or beers
  9. Meeting new people
  10. Stroopwaffles
  11. Austrian Chocolates
  12. Australian Chocolates
  13. Hanging out in the Community Corner
  14. My fellow Bits Buddies
  15. Did I mention Stroopwaffles?
  16. And so many more

The most excellent organizers of SQLBits have been amazing and have accommodated my request to present remotely on very short notice, so I will still get to present my session – it will just be from my bed instead of in person. While I will miss the (famous) Fancy Dress party, I will be wearing my costume during my session, but you have to attend my session to see it (I'm not even going to give any hints as to what it is!). There may even be a prize for whoever guesses correctly.

My session is Identifying and Preventing Unauthorized Power BI Gateways. It's a 20-minute session; check the agenda for the most accurate date and time.

Power BI Learning Opportunity at SQLBits

If you've been thinking about learning Power BI, I have a wonderful opportunity for you. I will be presenting, along with my friend and colleague Michael Johnson (Blog | Twitter), a full day of training at SQLBits on 8-March-2022. Our Training Day session is called Zero to Dashboard.

Our session assumes you have no knowledge of Power BI, so if this is your first encounter with Power BI, no worries, we've got you covered. We will cover the Power BI ecosystem, talk about the importance of data cleansing and data modeling, introduce visualization best practices, and review governance considerations. We reinforce all these concepts through hands-on labs that we go through as a group. By the end of the day, you will be able to create a dashboard. If you are one of those folks who needs to do things multiple times before they “stick” (like me), you will walk away with the lab manual used in class so you can go through the labs again to help solidify what you have learned.

SQLBits is a hybrid event this year, so if you cannot attend in person, no worries, you can attend virtually as well. If you are interested in attending, there are still registration slots available, but seats are limited, so don't wait too long to register.

Michael and I hope to see you there.

Unable to Validate Source Query in Tabular Editor

I recently encountered the error, “Unable to validate source query”, when trying to refresh the metadata for the tables in my tabular model using Tabular Editor. I immediately googled it and came up with a great post by Koen Verbeeck (Blog | Twitter). I had never seen this error before, and since my metadata refreshes had been working flawlessly for weeks, I was so excited when I found this post.

Long story short, this post did not help me. I tried everything suggested: I ran my partition queries wrapped in SET FMTONLY ON and they came back instantaneously in SSMS, and I added the TabularEditor_AddFalseWhereClause annotation from this thread. Neither worked. So I wasn't quite sure what was going on.

My last-ditch effort was to add a new table to my model to see if I was even getting a successful connection to my data source. I was prompted for the password, which had not happened before when adding new tables or refreshing table metadata (for weeks). I was using a legacy data source (Azure SQL Database) with SQL Server Authentication. Once I supplied the password, I could see a list of available objects in my database. I cancelled out of the new tables dialog, clicked Refresh Table Metadata, and winner-winner chicken dinner – no more “Unable to validate source query” error. It turns out my password had “mysteriously disappeared” from my connection string.

The moral of the story is: it's not always zebras when you hear hoofbeats; sometimes it is horses.

Hopefully, this post will help someone else waste significantly less time than I did on fixing this error.

Using Power BI To Track My Activities

As a Microsoft MVP, one of the things you have to do is keep track of all the “things” you do for the community, whether it be volunteering, organizing, speaking, etc. It can be a bit daunting trying to keep track of all of it. But hey, I'm a Data Platform MVP, how hard can it be to keep track of data?! Cue music from one of my favorite Blake Edwards movies… the Pink Panther.

At first I was just keeping track of everything in a text file via Notepad. That got very unmanageable very quickly with all the different kinds of things I was doing. I migrated all my data to a spreadsheet, because we all know that Excel is the most popular database in the world, right?

I knew that I had been busy in 2018, but I had no idea how busy until I used Power BI to look at my data. Yes, I was significantly busier in 2018 than I ever had been, and 2019 is shaping up to be just the same if not busier.

Take a look at what I created. It was a fun project to work on and allowed me to explore some things in Power BI that I don't work with on a regular basis. Let me know what you think.

How To Use Power BI Embedded For Your Customers

Recently I had a need to build a Power BI report for a client. This client has a multi-tenant database and their own custom web app. They want to show each client their data without showing any other clients' data. They also don't want to require their customers to have a Power BI license. Easy enough: just use Power BI Embedded with the App Owns Data model and set up Roles (row-level security) in the Power BI Desktop file. It should take less than five minutes to set up. Well, we all know it took longer than five minutes or this blog post wouldn't exist.

There are some great tutorials from Microsoft out there; they even provide a sample app on GitHub that you can download if you are not a web developer (I am so NOT a web developer!) to help you get started. There are some great blog posts about it too, one from Reza Rad and one from Kasper de Jonge. So why am I writing yet another blog post about it? Because the devil is in the details, and I completely missed the details, which means someone else must have as well.

I don't want to repeat what others have already written, so go ahead, go read their posts. It's okay, I'll wait here.

Now that you're familiar with Row Level Security in Power BI, how do you make it work when you want to pass in your customer's identifier because your customers don't have Power BI accounts? It seems like the only way to make row-level security dynamic is to use the Username() DAX function. But wait, doesn't that require the user to have a Power BI account? Sigh, it seems we are going in circles.

The one thing these articles don't talk about is that when you are using Power BI Embedded, you can pass in whatever you like for the EffectiveIdentity via the Power BI API and it will “overwrite” the Username() function. What?! That's right, it will completely ignore the Username() function and use whatever you give it. WooHoo!
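The sample app does this in C# when it builds the EffectiveIdentity, but you can see the same idea from PowerShell by calling the REST API directly. Here is a minimal sketch that generates an embed token for a report where the “username” is just a region name; the IDs, the role name Region, and the value Midwest are placeholders for your own setup.

# Generate an embed token whose EffectiveIdentity is simply "Midwest"
Import-Module MicrosoftPowerBIMgmt

Connect-PowerBIServiceAccount   # sign in as the "master" account that is a member of the role

$workspaceId = "<workspace guid>"
$reportId    = "<report guid>"
$datasetId   = "<dataset guid>"

# "Midwest" is not a Power BI user - it is simply the value Username() will return
$body = @{
    accessLevel = "View"
    identities  = @(
        @{
            username = "Midwest"
            roles    = @("Region")
            datasets = @($datasetId)
        }
    )
} | ConvertTo-Json -Depth 4

$embedToken = Invoke-PowerBIRestMethod -Method Post `
    -Url "groups/$workspaceId/reports/$reportId/GenerateToken" `
    -Body $body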

Let's see an example, so it's crystal clear. I have a small sample of data that has some sales.

Embedded-Sample-Relationships

Let's say I want to limit the view of the data by Region. Here are my available regions.

Embedded-Sample-Regions

I would need to create a role based on Region that looks like this.

Embedded-Sample-ManageRoles

Notice that I am using the Username() DAX function for Region. I know, I know, my customers aren't Power BI users, so this makes no sense – why would I use this? Like I said earlier, the value that you pass via the Web API as the EffectiveIdentity will overwrite the Username() function.

Now that I have my Roles set in my Power BI Desktop file, I can publish the file to the Power BI service. All that's left to do is to add members to my role. When using Power BI Embedded in this scenario, you only need to add one account to the role that was just created. Just keep in mind that this account must have a Power BI Pro license.

Navigate to the Security settings for the dataset.

Embedded-Sample-Navigate-to-Security

Add your user to the role.

Embedded-Sample-Add-Account-To-Role

It should look like this now.

Embedded-Sample-Final-Role

Now, let's look at the sample web app and how this will work. In the homecontroller.cs there is a call to EmbedReport, which contains the creation of the EffectiveIdentity; this is where the magic happens.

Embedded-Sample-Effective-Identity

Let's take a look at how this works from the front end. Here's my report with no security.

Embedded-Sample-No-Security

Now I just click the checkbox View as a different user, enter my values, then click Reload.

Embedded-Sample-Filtered-Result

Presto change-o, the data is filtered for just the Midwest Region. But wait, Midwest isn't a Power BI user, so how does that work? Again, the EffectiveIdentity overwrites the Username() function and applies whatever you have entered. It's magic, I'm telling you.

One thing that tripped me up was that I wanted to filter by RegionID, not region name. When we work with data, we want to be as precise as possible, so we typically use a primary key (or other unique identifier) to filter our data. However, I discovered through trial and error that you cannot pass a number; you have to pass character data in order for the filter to work. This means that for your data, you may need to ensure that there is some unique character data field to identify your customers other than a numeric ID field.

Oh, and if you want to play around with this, you can use the sample app from GitHub and this Power BI Desktop file.

I hope this was helpful.

Update: 2-Feb-2019

I was asked an interesting question via Twitter by Dave Ruijter about converting the data to a string so you could essentially still use the numeric ID field.

I added a calculated column to my dataset that uses the FORMAT() function to convert the existing numeric ID field to a string.

Embedded-Sample-New-Computed-Column

Then I based my role on that column, still using the Username() function.

Embedded-Sample-Using-RegionID-Role

Now I can filter based on the ID field, WooHoo!

Embedded-Sample-Final-Using-RegionID

Thanks Dave Ruijter for making me think about this a little bit more.