Trigger OnCreate Cloud Flow for Existing Rows (Part 2 of 8)
This is the second post in a series on Automating SharePoint Integration with Dataverse using Power Automate. You can check out the other posts via these links (Part 1, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8).
In my previous post, I explained how to automatically create the SharePoint Document Location using a cloud flow when a new row is added to a Dataverse/Dynamics 365 table. But if you already have existing data in your system, you might want to run the flow for those existing rows to create their SharePoint Document Locations.
In this post, you will learn how to run an OnCreate cloud flow for existing data without re-creating it as an on-demand flow. You will also learn how to use FetchXML to query rows that do not have related child rows (in this case, the SharePoint Document Location).
This process involves:
- updating the trigger of the existing cloud flow from "OnCreate" to "OnCreate and OnUpdate"
- filtering on an unused column so that the flow is triggered only on updates of that particular column
- using Bulk Data Updater in XrmToolBox to touch (update with the same value) the column above, filtered to the Contacts that do not have a related SharePoint Document Location
First of all, update the trigger by changing the Change type from "Added" to "Added or Modified" so that the flow is also triggered on update.
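With the Dataverse "When a row is added, modified or deleted" trigger, the relevant settings might look like the sketch below. The Select columns value restricts the OnUpdate side to changes of that one column; the table, scope, and column values here are just this post's example and should be adjusted to your environment.

```
Change type:     Added or Modified
Table name:      Contacts
Scope:           Organization
Select columns:  creditonhold
```

Without Select columns, any update to a Contact would re-run the flow, which is exactly what the unused-column filter is meant to prevent.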
The next step is to update this unused column (creditonhold in this case) with the Bulk Data Updater in XrmToolBox so that the flow is triggered for those Contacts. Some of the existing Contacts may already have a SharePoint Document Location, created when a user opened the associated Documents view. To select only the Contacts that do not have a related SharePoint Document Location, you can use the following left outer join FetchXML query, as mentioned in the PowerObjects blog.
```xml
<fetch>
  <entity name="contact">
    <attribute name="fullname" />
    <link-entity name="sharepointdocumentlocation"
                 from="regardingobjectid"
                 to="contactid"
                 link-type="outer" />
    <filter>
      <condition entityname="sharepointdocumentlocation"
                 attribute="sharepointdocumentlocationid"
                 operator="null" />
    </filter>
  </entity>
</fetch>
```
Thanks to Alex Shlega for pointing out an issue with the initial query. 😊
In the Bulk Data Updater tool,
1. Click the Edit FetchXML button or the FetchXML Builder button to specify the FetchXML
2. Select the unused column (creditonhold in this case) as the attribute to update
3. Select Touch as the action, which updates the column with its existing value
4. Add the attribute
5. Update the unused column for all the Contacts returned by the FetchXML
If there are a lot of Contacts in the system, triggering many cloud flows in a short time span can hit the service protection API limits. To avoid that, it is always a good idea to space out the bulk update with some delay in between. To do so,
6. Update the Wait between calls setting and
7. the Batch size
so that the number of requests does not exceed 6,000 within the 5-minute sliding window. If the Wait between calls setting is longer than 120 seconds, the tool may stop with a timeout error. In that case, increase the Service timeout setting of the connection to the environment.
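As a rough illustration of how these two settings interact (assuming each batch is sent as a single API request, and ignoring the time each request itself takes):

```
Batch size:          10 records per request
Wait between calls:  2 seconds
→ at most 300 s / 2 s = ~150 requests (~1,500 records) per 5-minute window,
  well below the 6,000-request service protection limit
```

The exact numbers are only an example; the point is that the wait and batch size together bound the request rate over the sliding window.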
Optional Step
To avoid accidentally creating a duplicate Document Location for Contacts that already have one, you can update the cloud flow from my previous post: add a step to list the existing Document Locations for the Contact, and only create a new one when none exists.
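A minimal sketch of those extra steps, assuming the flow triggers on the Contact table (the step name "List_rows" and the exact trigger output reference are illustrative, not prescribed by the original flow):

```
List rows (table: SharePoint Document Locations)
  Filter rows:  _regardingobjectid_value eq @{triggerOutputs()?['body/contactid']}
  Row count:    1

Condition:  @empty(outputs('List_rows')?['body/value'])
  If yes → Add a new row (SharePoint Document Location)
  If no  → terminate / do nothing
```

Requesting only one row is enough here, since the condition only needs to know whether any related Document Location exists.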
By using Touch in the Bulk Data Updater tool to force an update with the same value, we can run the OnCreate cloud flow for the existing data without re-creating it as an on-demand flow.