19 Jan 2020

Practical tricks and tips with PowerBI & Model Coordination API

Xiaodong Liang

In the last blog, we introduced the sample that analyzes clash data with PowerBI. I'd like to share a few experiences from working on this sample, which might be useful for you when working with PowerBI.

1.    As shown in the diagram of the last blog, this sample needs to prepare records of clashes and object properties. Clash data is provided by the Clash API of Model Coordination, but how do we prepare the object properties on the server side? Although the Model Derivative API can provide the properties of a model, it returns a huge payload with everything, and only one document per call. Fortunately, the Model Coordination API also provides the Index API. Quoting from the tutorial:

As part of the SVF conversion process performed by the model derivative service, property data from the original seed file is extracted to a PDB (Property Database) file. The model coordination indexing service uses these files to create an index which represents the merged property data from every model in the model set version.

In other words, we can extract the properties directly with the Index API, using the Amazon S3 Select query format! We can easily specify a filter and which properties we need in the query result. Running on the backend, it saves us a lot of time in getting the expected data.

In this sample, our query is:

 "select s.file, s.db, s.docs, s.id, s.p153cb174 as name,s.p20d8441e as category, s.p30db51f9 as family, s.p13b6b3a0 as type from s3object s where count(s.docs)>0"

That asks the API to return all object records together with their category, family and type.
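For illustration, a minimal sketch of submitting such a query from Node.js is shown below. axios, the endpoint path and the payload shape are assumptions; please take the exact route and request format from the Model Coordination Index API documentation. Note that the hashed property column names (s.p153cb174 etc.) are specific to the index of your model set.

//Sketch: submit an index query for a model set version.
//Endpoint path and payload shape are assumptions; verify against the Index API docs.
const axios = require('axios');

async function queryIndex(containerId, modelSetId, version, accessToken) {
    //S3 Select style statement: all objects that have clash documents,
    //aliasing the hashed property columns to readable names
    const statement =
        "select s.file, s.db, s.docs, s.id, s.p153cb174 as name, s.p20d8441e as category, " +
        "s.p30db51f9 as family, s.p13b6b3a0 as type from s3object s where count(s.docs)>0";

    const url = 'https://developer.api.autodesk.com/bim360/modelset/v3/containers/' + containerId +
                '/modelsets/' + modelSetId + '/versions/' + version + '/indexes/queries';

    const resp = await axios.post(url, { query: { statement: statement } },
        { headers: { Authorization: 'Bearer ' + accessToken } });
    //the service returns a query job; poll it and download the result rows when it is done
    return resp.data;
}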

2.    In the workflow, the PowerBI embed container plays a key role on the client side, while it is actually just a presenter of a PowerBI report. The report has been designed in advance. In this sample, we prepared a report with the web version of PowerBI: http://app.powerbi.com. We placed a bubble chart visual from the PowerBI marketplace and a table visual. In the deployed environment, the two views are arranged with a better layout.


The PowerBI embed container simply loads the report and renders the data in the browser. Updating the data of the report refreshes the visuals, and the embed container view can be refreshed accordingly. I would recommend playing with some dummy records to see what the result looks like before publishing the template report. The playground tool of the PowerBI Client API can also be a good start for getting familiar with the embed container.
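As a hedged example, loading and refreshing the report with the powerbi-client library could look roughly like the sketch below; the report id, embed URL, token and container element are placeholders for your own values.

//Sketch: embed the designed report and refresh it after the data changes.
//reportId, embedUrl and accessToken are placeholders.
const models = window['powerbi-client'].models;

const embedConfig = {
    type: 'report',
    id: reportId,
    embedUrl: embedUrl,
    accessToken: accessToken,
    tokenType: models.TokenType.Aad,
    settings: { panes: { filters: { visible: false } } }
};

//render the report into a container element on the page
const container = document.getElementById('reportContainer');
const report = powerbi.embed(container, embedConfig);

//after the dataset has been updated on the service side,
//ask the embedded report to re-render with the new data (returns a promise)
report.refresh();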

3.    By default, the PowerBI API requires the developer to work with a 3-legged token, which means a user login. Since the template report is defined by the developer, it does not make sense to run the developer's login process when an end user tries out the sample. Fortunately, it is possible to work with a 2-legged-style token using the developer's PowerBI account and password. It uses the endpoint below with the form data as follows:

POST: https://login.microsoftonline.com/common/oauth2/token

const body = {
        client_id: config.pbi.applicationId,
        scope: 'openid Dataset.ReadWrite.All',
        resource: config.pbi.resourceUrl,
        username: config.pbi.pbiUsername,
        password: config.pbi.pbiPassword,
        grant_type: 'password'
    }
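To actually obtain the token, this form data has to be posted URL-encoded to the endpoint above; a minimal sketch, assuming axios as the HTTP client:

//Sketch: exchange the developer's PowerBI credentials for an access token
//using the password grant. axios is an assumption; any HTTP client works.
const axios = require('axios');

async function getPowerBiToken(body) {
    //the token endpoint expects application/x-www-form-urlencoded content
    const resp = await axios.post(
        'https://login.microsoftonline.com/common/oauth2/token',
        new URLSearchParams(body).toString(),
        { headers: { 'Content-Type': 'application/x-www-form-urlencoded' } });
    return resp.data.access_token;
}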

However, it is tricky to set up the authorization and grant the app the required permissions. The separate readme introduces the detailed steps on how to configure the grant, and also how to design a template report with dummy data. A script is provided for you to prepare the data for the report.
 
4.    This sample assumes a scenario in which the user checks clashes between any two documents at random, which means the number of checks is unpredictable. Originally, I uploaded the records of clashes and object properties to Azure CosmosDB and linked CosmosDB with PowerBI. That is a dataset in import mode. This way, once the records change, we update them in CosmosDB and simply call refresh.
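For reference, triggering that refresh from the backend goes through the refreshes endpoint of the Power BI REST API; a rough sketch (workspace id, dataset id and token are placeholders):

//Sketch: queue a refresh of an import-mode dataset via the Power BI REST API.
const axios = require('axios');

async function refreshDataset(workspaceId, datasetId, accessToken) {
    const url = 'https://api.powerbi.com/v1.0/myorg/groups/' + workspaceId +
                '/datasets/' + datasetId + '/refreshes';
    //an empty POST body is enough to queue a refresh
    await axios.post(url, {}, { headers: { Authorization: 'Bearer ' + accessToken } });
}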

However, PowerBI limits how many times per day a report can be refreshed from its data source. The maximum is 24 times, even with a professional PowerBI subscription. That is to say, even though the data source is updated, the report will not refresh anymore after the limit is hit. This is a reasonable design: it is crucial to avoid constantly refreshing an import mode dataset. In our sample the frequency is not too high, but the number of refreshes might still exceed 24 per day.

Fortunately, PowerBI provides a couple of other dataset modes, e.g. streaming datasets and push datasets. A streaming dataset is typically for IoT scenarios with instant data updates. A push dataset does not contain a formal definition of a data source, so it does not require you to perform a data refresh in PowerBI; you refresh it by pushing your data into the dataset through an external service or process. The exciting part is that it has no daily refresh limit! So in this sample, when the source data is updated, the sample deletes the old data and pushes the new data to the report again. The separate readme introduces the detailed steps; check Microsoft's help documentation for more details. The endpoint and payload body are shown below.

POST: https://api.powerbi.com/v1.0/myorg/groups/${config.pbi.workspaceId}/datasets

const body = {
        name: dataset_name, 
        tables: [
        {
            name: table_name, 
            columns: [
            //object category
            { name: "cat", dataType: "string"}, 
            //clash ids array
            { name: "clash", dataType: "string"},
            //clash counts (length of array above)
            { name: "clashcount", dataType: "Int64"},
            //name of the document which the object resides in 
            { name: "docname", dataType: "string"},
            //family name of the object
            { name: "fam", dataType: "string"}, 
            //document index id (string id)
            { name: "mid", dataType: "string"},
            //object name (same to that on model tree of Forge Viewer)
            { name: "name", dataType: "string"},
            //object type
            { name: "type", dataType: "string"},
            //object id in derivative (same to dbId in Forge Viewer)
            { name: "vid", dataType: "string"}  
            ]
        }
      ]  
    }
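Once the push dataset exists, updating the report boils down to deleting the old rows and posting the new ones. A hedged sketch of those two calls against the Power BI REST API (ids, table name and rows are placeholders):

//Sketch: replace the rows of a push dataset table via the Power BI REST API.
const axios = require('axios');

async function replaceRows(workspaceId, datasetId, tableName, rows, accessToken) {
    const url = 'https://api.powerbi.com/v1.0/myorg/groups/' + workspaceId +
                '/datasets/' + datasetId + '/tables/' + tableName + '/rows';
    const headers = { Authorization: 'Bearer ' + accessToken };

    //remove all existing rows of the table
    await axios.delete(url, { headers: headers });

    //push the fresh records; each row follows the column definition above
    await axios.post(url, { rows: rows }, { headers: headers });
}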

If you have a smarter way, please let us know.

5.    When the table visual is clicked, the PowerBI client API reports the selected data, from which we can extract the clash information and present it in Forge Viewer. However, if the bubble chart visual is selected, the same logic does not work: PowerBI only reports the identifiers of the selected bubbles, e.g. a category within one document. It does not report the exact clash information (clash id). In order to get the corresponding summarized data from the identifiers, we need to export the data of the report and filter it by the identifiers.
The steps are asynchronous.

//bubble chart is selected
//get the selected identity
const identities = data.dataPoints[0].identity;
//store the data on window, as the following steps are an async process
window.bubbleData = { identities: identities, report: _this._activeReport };

(async () => {
    //get identities from the bubble data
    const identities = window.bubbleData.identities;
    //get the report behind the bubble chart
    const report = window.bubbleData.report;
    //get the active page of the report
    const pages = await report.getPages();
    const page = pages[0];
    //get the visuals of the page
    const visuals = await page.getVisuals();
    const visual = visuals[0];
    //export the summarized (selected) data of the visual
    const rawData = await visual.exportData(window['powerbi-client'].models.ExportDataType.Summarized);
    //parse the raw data
    const jsondata = await this.parse(rawData.data);
    //filter out the clash id groups by the identities
    const clashIds = await this.filter(jsondata, identities);
    //display the clashes in Forge Viewer
    global_forgeViewer.isolateClash(twoModels, clashIds);
})();
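For context, a hedged sketch of how such a handler could be wired to the embedded report's dataSelected event; the visual type check and the helper functions are illustrative, not the sample's exact code:

//Sketch: listen for selections on the embedded report and branch between
//the table visual and the bubble chart visual. Helper names are illustrative.
report.on('dataSelected', (event) => {
    const data = event.detail;
    if (!data.dataPoints || data.dataPoints.length === 0) return;

    if (data.visual.type === 'tableEx') {
        //table visual: the selected data points already carry the clash information
        handleTableSelection(data.dataPoints);
    } else {
        //bubble chart: only identities are reported, so run the async
        //export/parse/filter flow shown above to recover the clash ids
        handleBubbleSelection(data.dataPoints[0].identity);
    }
});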

Special thanks to Jaime for the detailed review of the sample!

 
