06. Frequently used Nodes
Before we dive into more sophisticated scenarios, we'll take some time to get familiar with some of the Nodes that are used in most Workflows. These are Nodes that sit directly before or after the Connector Nodes because they prepare data or evaluate the result of an action.
While there isn't a graded exercise for this section, it's recommended that you follow along to gain familiarity with them.
We also recommend that you review Nodes you should know so you have a general awareness of the full set of frequently used Nodes.
If Node
Let's begin by recapping the If Node, which is the most frequently used flow control Node.
Follow these steps in a new Workflow.
- Add If and Variable Bar.
- Connect Start.RunNow → If.
- Add Custom Property Variable Bar.animal as an output and set Variable Bar.animal to cat.
- Connect Variable Bar.animal → If.Value.
- Set If.Expression to Value = "cat".
- Run the Workflow and you'll see that the Workflow Log for the If Node shows it fired → True.
- Set Variable Bar.animal to dog.
- Run the Workflow again to see the If Node fired → False.
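Conceptually, the If Node evaluates a comparison against the incoming Value and fires one of two outputs. Here is a minimal Python sketch of that behaviour, reusing the animal/cat example above; it is only an illustration, not Flowgear's implementation.

```python
# Minimal sketch of what the If Node does: evaluate an expression against
# the incoming Value and fire either the True or the False output.
def route_if(value: str) -> str:
    # Equivalent of If.Expression set to: Value = "cat"
    return "True" if value == "cat" else "False"

print(route_if("cat"))  # fired -> True
print(route_if("dog"))  # fired -> False
```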
Review the If help article for more information about this Node.
Choose Node
In some cases you may want to branch control flow in more than just two ways - the Choose Node is a good way to do this.
Follow these steps, continuing in the Workflow from the previous example.
- Replace the If Node with a Choose Node.
- Connect Variable Bar.animal → Choose.Expression.
- Click the + button on the Choose Node to add an Execution Output to define an entry for cat and another for dog. Unlike other Nodes, the Choose Node supports custom Execution Outputs. These allow you to conditionally affect the flow control of the Workflow.
- Run the Workflow and you'll see the Workflow Log for the Choose Node fired → dog.
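In programming terms, the Choose Node plays the role of a switch/match statement: a single expression selects one of several custom Execution Outputs. A rough Python sketch of the same idea is shown below; the output names are just the cat/dog entries added above, nothing Flowgear-specific.

```python
# Sketch of the Choose Node: route flow based on the value of an expression.
# Requires Python 3.10+ for match/case.
def route_choose(animal: str) -> str:
    match animal:
        case "cat":
            return "cat Execution Output"
        case "dog":
            return "dog Execution Output"
        case _:
            return "no matching Execution Output"

print(route_choose("dog"))  # fired -> dog
```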
Review the Choose help article for more information about this Node.
Error Node
The Error Node, which we looked at earlier, allows you to throw a custom error. You can use this to simplify technical errors or to fire based on data validation failures or other internal conditions.
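In code terms this is similar to raising an exception when a validation check fails. A hedged Python sketch of that pattern follows; the order record and message are purely illustrative.

```python
# Sketch of a data validation failure being turned into a custom error,
# similar in spirit to routing a failed check into the Error Node.
order = {"orderId": 123, "customer": ""}

if not order["customer"]:
    raise ValueError(f"Order {order['orderId']} is missing a customer code")
```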
Review the Error help article for more information about this Node.
For Each Node
The For Each Node enables iteration over a set of records in a document. This is useful for cases where subsequent steps in a Workflow need to work with individual records or smaller numbers of records.
Follow these steps in a new Workflow.
- Add For Each.
- Set For Each.SourceDocument to the value below:
  { "order": [ { "orderId": 123, "customer": "abc" }, { "orderId": 456, "customer": "def" }, { "orderId": 789, "customer": "ghi" } ] }
- Focus the For Each.Path Property (i.e. click the Property value) and a picker will display allowing you to select the element in the document you want to use to split items out. Click orderId (just under order).
- On the Node Header, click v → Run from this Node.
- In the Workflow Logs, you'll see the For Each Node fire the → Item Output three times, followed by the → Finished Output. Under the Item Property, you'll see the individual order id values emitted, one per row.
- Focus the For Each.Path Property value again and this time select the order Property.
- This time when you run the Workflow, you'll see that the entire order element is emitted into the Item Property. By adjusting the element that is selected in Path, the For Each Node can emit either entire records or values for specific fields within a document.
- In some cases, you may have a large number of records that need to be split into smaller chunks rather than individual records. The ChunkSize Property allows you to control this behavior.
- Set For Each.ChunkSize to 2 and set For Each.Encapsulation to ParentNode. Encapsulation controls how we wrap the matched elements. Since the document can only have one top-level element, we need to set Encapsulation when ChunkSize is more than one.
- When you run the Workflow now, you'll see only two Workflow Logs because the first two order records are merged into the same iteration.
This technique can be used for cases where a target system has a limit to the number of records it can process at once.
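If it helps to picture the chunking behaviour outside of Flowgear, here is a small Python sketch that splits the same three order records into chunks of two and wraps each chunk in a parent element, much as the Encapsulation setting does. The function and wrapper names are illustrative only, not the Node's actual implementation.

```python
# Sketch: emit records in chunks of ChunkSize, wrapping each chunk in a
# parent element so every emitted document still has a single root.
orders = [
    {"orderId": 123, "customer": "abc"},
    {"orderId": 456, "customer": "def"},
    {"orderId": 789, "customer": "ghi"},
]

def iterate_chunks(records, chunk_size):
    for i in range(0, len(records), chunk_size):
        # Rough equivalent of Encapsulation=ParentNode
        yield {"order": records[i:i + chunk_size]}

for item in iterate_chunks(orders, chunk_size=2):
    print(item)  # two iterations: the first with two orders, the second with one
```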
Review the For Each help article for more information about this Node.
Loop Node
The Loop Node provides iteration over a sequence of integers.
Follow these steps in a new Workflow.
- Add Loop, If and Loop Exit.
- Set Loop.Group to loop1, Loop.Start to 1, Loop.Stop to 5 and Loop.Increment to 1.
- Connect Start.RunNow → Loop.
- When you run the Workflow, you'll see five Workflow Logs that fired → Loop along with a final Log for → Finished.
- You can exit a loop before it completes by using the Loop Exit Node.
- Connect Loop.Current → If.Value.
- Set If.Expression to Value = 3.
- Set Loop Exit.Group to loop1.
- Connect Loop.Loop → If and If.True → Loop Exit.
- When you run the Workflow, the conditional statement should cause the Loop Exit Node to invoke on the third iteration and the Loop Node should only fire → Finished instead of the next iteration.
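The equivalent control flow in ordinary code is a counted loop with an early break. Here is a rough Python sketch of what the Loop, If and Loop Exit Nodes do together, using the same Property values as above:

```python
# Sketch of Loop.Start=1, Loop.Stop=5, Loop.Increment=1 with an early exit
# when the If condition (Value = 3) routes to Loop Exit.
for current in range(1, 6):  # iterates 1 through 5
    print("Loop fired, Current =", current)
    if current == 3:
        break                # Loop Exit: skip the remaining iterations
print("Finished")
```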
Review the Loop help article for more information about this Node.
Formatter Node
The Formatter Node allows string (text) data to be prepared by translating Property values into a templated string.
Follow these steps in a new Workflow.
- Add Formatter.
- Set Formatter.Expression to Hello, {object}.
- Add Formatter.object and set its value to world.
- Run the Workflow to see the text Hello, world emitted from the Result Property.
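The Expression works like a template string in which named placeholders are replaced with Property values. A minimal Python analogue using str.format is shown below; this is standard Python, not the Node's implementation.

```python
# Sketch of the Formatter Node: substitute a Property value into a template.
expression = "Hello, {object}"
result = expression.format(object="world")
print(result)  # -> Hello, world
```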
Review the Formatter help article for more information about this Node.
Escaping and Injection Attacks
The Formatter Node supports some common types of escaping (see the Escaping Property). However, care should be exercised even when escaping untrusted input data.
Consider an example where you are querying a SQL database for a company based on its name. An example query template is shown below:
SELECT * FROM company WHERE name like '{name}'
If the Formatter Node is used to substitute the name Property with a filter value a user has provided, the user could provide a value like %'; DROP TABLE company; -- causing the full SQL statement to resolve to:
SELECT * FROM company WHERE name like '%'; DROP TABLE company; --'
This is known as an injection attack. In the example above, we considered SQL but this type of attack can be applied to almost any type of service.
Take these steps to ensure your solutions are not vulnerable to this type of attack:
- Be aware of cases when user-provided data is being processed and, where possible, filter or sanitize it before using it to query data sources.
- Wherever possible, allow the appropriate Connector to handle the concern of translating parameters into a query. For example, our SQL Query Connectors handle this without using string translation and are therefore not vulnerable to this problem (see the sketch after this list).
- If there is no other option, use the Formatter Node, but ensure you are using the appropriate Escaping option and test some adversarial cases (i.e. data that would be problematic if it wasn't escaped correctly).
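To illustrate the second point, the sketch below uses Python's standard sqlite3 module with a bound parameter: the filter value is passed separately from the SQL text, so the driver handles escaping and the injection payload is treated as plain data. The table and data are illustrative only, and this is not how the Flowgear SQL Connectors are implemented internally.

```python
import sqlite3

# Illustrative table so the query has something to run against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE company (name TEXT)")
conn.execute("INSERT INTO company (name) VALUES ('Acme')")

# Untrusted input: harmless here because it is bound as a parameter
# rather than concatenated into the SQL text.
user_filter = "%'; DROP TABLE company; --"

rows = conn.execute(
    "SELECT * FROM company WHERE name LIKE ?",  # placeholder, not string formatting
    (user_filter,),
).fetchall()
print(rows)  # [] - no injection, just a filter that matches nothing
```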
See Avoiding Sql Injection Attacks for more information.
JSON Convert Node
You may encounter cases where you need to provide or accept data externally in a certain format. In other cases, certain Nodes may only be able to accept or emit data in a certain format. For these scenarios, the JSON Convert Node supports conversion of data between XML and JSON.
Follow these steps in a new Workflow.
- Add JSON Convert.
- Set JSON Convert.Json to the value below:
  { "order": [ { "orderId": 123, "customer": "abc" } ] }
- Run the Workflow to see the XML representation of the above JSON in the JSON Convert Workflow Log.
- Copy the XML from the Workflow Log entry and paste it into JSON Convert.Xml.
- Set JSON Convert.Action to XmlToJson.
- Run the Workflow again. Note that the "order" object is no longer considered an array (i.e. the order isn't wrapped in square brackets). This is because in XML, when there is only one item in a parent element, it's not possible to determine whether the parent should be treated as an array container. This issue and a way to work around it are discussed in the JSON Convert help article.
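If you'd like to reproduce the round-trip outside Flowgear, the hedged sketch below uses the third-party xmltodict package (an assumption for illustration, not what the JSON Convert Node uses internally) to show the same single-item ambiguity and the usual workaround of forcing an element to be treated as a list.

```python
# pip install xmltodict
import json
import xmltodict

xml = "<root><order><orderId>123</orderId><customer>abc</customer></order></root>"

# Plain conversion: a single <order> element comes back as an object, not a list.
print(json.dumps(xmltodict.parse(xml)))

# Workaround: tell the parser that <order> should always be treated as a list.
print(json.dumps(xmltodict.parse(xml, force_list=("order",))))
```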
Review the JSON Convert help article for more information about this Node.
Reduce Node
An important concept in app integration is the ability to exclude data that has not changed since it was last processed.
Often, it's not possible to precisely query a data source for the required delta data because the filters you need aren't supported in the third party API. For example, if you sync customers daily but the source doesn't allow you to filter for customers changed after a certain date, you'll want to discard records that haven't changed early in the Workflow so that you aren't unnecessarily wasting resources processing unchanged data.
One way of doing this is by using the Reduce Node which maintains a hash of records that it has previously encountered and is then able to remove unchanged records before continuing with the next stage of the Workflow.
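Conceptually, the Node behaves like a keyed hash store: it hashes each record, drops records whose hash matches what it saw last time, and remembers the rest. The simplified Python sketch below illustrates that idea only; it is not the Reduce 2 Node's actual implementation.

```python
import hashlib
import json

# Simplified sketch of hash-based change detection: keep a hash per record key
# and only pass through records whose content has changed since the last run.
seen = {}  # in a real implementation this state is persisted between runs

def record_hash(record):
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def reduce_commit(records, key_field):
    changed = []
    for record in records:
        key = str(record[key_field])
        digest = record_hash(record)
        if seen.get(key) != digest:
            changed.append(record)
            seen[key] = digest  # "ReduceCommit": store the hash immediately
    return changed

orders = [{"orderId": 123, "customer": "abc"}, {"orderId": 456, "customer": "def"}]
print(reduce_commit(orders, "orderId"))  # first run: both records
print(reduce_commit(orders, "orderId"))  # second run: [] - nothing changed
```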
Follow these steps in a new Workflow.
- Add Reduce 2.
- Set Reduce 2.Group to contacts, then set Reduce 2.SourceDocument to the value below:
  { "order": [ { "orderId": 123, "customer": "abc" }, { "orderId": 456, "customer": "def" }, { "orderId": 789, "customer": "ghi" } ] }
- Focus Reduce 2.Path and select order from the tree view that displays. This tells the Node how to identify a single record in the document.
- Set Reduce 2.KeyPath to orderId. This tells the Node that the orderId field located under the order element (as specified in the Path Property) is the unique identifier or key for the record.
- Run the Workflow and drill into the ReducedDocument Property - you'll see the full set of order records are shown there.
- Run the Workflow a second time and you'll notice that there are no orders returned. This is because that data is now considered processed and unchanged.
- Open the SourceDocument Property and change the customer of the first order from abc to test.
- Run the Workflow again to see that only the changed order is shown in the ReducedDocument Property in the Workflow Logs.
Two-step Reduce
In the example above, we used the Node in what is called ReduceCommit mode. In other words, it's removing unchanged records and committing (storing) the remaining records as 'seen' so that they too will be excluded if they have not been modified by the next time the Node is invoked.
This approach is normally too simplistic for production scenarios because if a step in the Workflow fails after the Reduce Node, the data that was being processed will still be removed by the Reduce Node the next time it runs and we'll have no opportunity to correct the failure.
To get around this we split the Reduce operations into two - a Reduce action, and then later, a Commit action.
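Continuing the earlier sketch, the two-step pattern separates detection from commitment: the reduce step only reads the stored hashes, and the commit step writes them once the downstream work has succeeded. Again, this is a conceptual Python sketch rather than the Node's implementation.

```python
import hashlib
import json

def record_hash(record):
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def reduce_only(records, key_field, seen):
    # Phase 1: return changed records without storing anything.
    return [r for r in records if seen.get(str(r[key_field])) != record_hash(r)]

def commit(records, key_field, seen):
    # Phase 2: run only after the downstream steps have succeeded.
    for r in records:
        seen[str(r[key_field])] = record_hash(r)

seen = {}
orders = [{"orderId": 123, "customer": "abc"}]
changed = reduce_only(orders, "orderId", seen)
# ... downstream processing happens here; if it fails, commit() never runs
# and the records will be picked up again on the next run ...
commit(changed, "orderId", seen)
```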
- Rename Reduce 2 to Reduce Step.
- Set Reduce Step.Action to Reduce and change some of the data in the Reduce Step.SourceDocument Property. For example, change a customer code or two.
- Add a second Reduce 2 and rename it to Commit Step.
- Set Commit Step.Group to contacts and Commit Step.Action to Commit.
- Copy the value of Reduce Step.Path to Commit Step.Path and the value of Reduce Step.KeyPath to Commit Step.KeyPath.
- Connect Reduce Step.ReducedDocument → Commit Step.SourceDocument.
- Run the Workflow a few times and you'll notice that the same data is returned. This is because the Commit stage of the Reduce is not running.
- Connect Reduce Step → Commit Step.
- Run the Workflow again and then a second time. Notice that no data is returned the second time because the Commit stage has run.
In a more complete Workflow, there will be a series of steps between the first and second Reduce 2 Nodes. If any of those steps fail, the second Reduce 2 Node won't run, which means that the data that failed to process will show up for the next run.
Review the Reduce 2 help article for more information about this Node.
String Builder Node
When a Workflow uses iterative Nodes like For Each, you may need to progressively build up a document as each iteration completes. One way of doing this is with the String Builder Node.
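The pattern is the same as appending to a named buffer on each iteration and reading it back once the loop finishes. A short Python sketch of that idea is shown below (the variable name example mirrors the steps that follow; this is not the Node's implementation).

```python
# Sketch of the String Builder pattern: append on each iteration, read at the end.
buffers = {}

def append(name, value):
    buffers[name] = buffers.get(name, "") + value

def read(name):
    return buffers.get(name, "")

for current in range(1, 4):  # Loop.Start=1, Loop.Stop=3, Loop.Increment=1
    append("example", str(current))

print(read("example"))  # -> 123
```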
Follow these steps in a new Workflow.
- Add Loop, add a String Builder renamed String Append, and add a second String Builder renamed String Read.
- Set Loop.Start to 1, Loop.Stop to 3 and Loop.Increment to 1.
- Connect Loop.Loop → String Append.
- Connect Loop.Finished → String Read.
- Set String Append.Action to Append.
- Set String Append.VariableName to example.
- Connect Loop.Current → String Append.Value.
- Set String Read.Action to Read.
- Set String Read.VariableName to example.
- Run the Workflow and check that the value of the Value Property in the last String Builder Workflow Log is 123.
Review the String Builder help article for more information about this Node.
Workflow Node
A key aspect of reducing effort is being able to reuse what you have already built. To support this, Flowgear allows not just Nodes to be added to a Workflow, but also other Workflows. By enabling Workflows to call other Workflows, you're able to build reusable components.
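The closest analogy in code is extracting logic into a function and calling it wherever it's needed. A hedged Python sketch follows; get_employee below is a stand-in for the Get Employee sub-Workflow, not a real Flowgear call.

```python
# Sketch: a reusable sub-Workflow is like a function other code can call.
def get_employee(employee_id):
    # Stand-in for the Get Employee Workflow built earlier; the data is made up.
    return {"id": employee_id, "name": f"Employee {employee_id}"}

for employee_id in range(0, 4):  # mirrors Loop.Start=0, Loop.Stop=3
    print(get_employee(employee_id))
```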
Exercise 05: Using Sub-Workflows
Follow these steps in a new Workflow.
- Add Loop.
- Click + to the right of the Loop Node to open the Node Chooser again and this time, click the Workflows tab.
- Filter for the Get Employee exercise Workflow you created earlier and select it. A Node representing the chosen Workflow will be added to the design canvas. Properties that were defined on Variable Bar Nodes in the chosen Workflow show up as Properties, but notice that they are swapped around - an output Property on a Variable Bar is an input Property on the Workflow Node.
- Set Loop.Start to 0, Loop.Stop to 3 and Loop.Increment to 1.
- Connect Loop.Current → Get Employee.id.
- Connect Loop.Loop → Get Employee.
- Run the Workflow to see it iterate through employees with ids 0 through 3. Note that id 0 will fail because that employee id doesn't exist.
Review the Workflow Node help article for more information about calling Workflows from other Workflows.
- Connect Start.RunNow → Loop.
Save and run your Workflow, then click Submit Exercise to grade it.
Key/Value Nodes
Storing Key/Values
In the String Builder example, we looked at how we can store or accumulate data as a Workflow executes, but this data is not retained after the Workflow completes.
By contrast, the Key/Value Nodes allow you to tag data and then report on it later. They are called Key/Value Nodes because they allow you to associate a key with a value.
For example, if you're integrating sales orders, the key component could be the order number while the value component could be the success or failure information, potentially including an error message if a failure occurred.
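As a mental model, each key/value entry tags an identifier with an outcome. Below is a small Python sketch of recording order outcomes; the structure and field names are illustrative, not how the Key/Value Nodes store data.

```python
# Sketch: tag each processed order number with a success/failure outcome.
key_values = {}

def set_key_value(group, key, status, value):
    key_values[(group, key)] = {"Status": status, "Value": value}

set_key_value("orders", "SO-1001", "Success", "Synced successfully")
set_key_value("orders", "SO-1002", "Error", "Customer code not found")
print(key_values)
```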
Exercise 06: Using Key-Values
Copy the steps we created in the Workflow Node example above into a new Workflow before following these steps.
- Connect Start.RunNow → Loop.
- Add a Set Key-Value 2 Node, renamed Store Success, after the Get Employee Node.
- Connect Get Employee → Store Success.
- Set Store Success.Group to contacts.
- Set Store Success.Status to Success.
- Connect Loop.Current → Store Success.Key.
- Connect Get Employee.name → Store Success.Value.
- Add a second Set Key-Value 2, renamed Store Error, below the existing Store Success Node.
- Connect Get Employee.Error → Store Error.
- Set Store Error.Group to contacts.
- Set Store Error.Status to Error.
- Connect Loop.Current → Store Error.Key.
- Connect Start.Last_Error_Info → Store Error.Value.
- Run the Workflow to see the error key/value fire for employee id 0 and the success key/value fire for all other employees.
Where the employee is successfully returned, we create a key/value that correlates the id of an employee with their name.
Where the employee does not exist, we create a key/value that correlates the id of the employee with an error message.
Review the Set Key-Value 2 and Set Key-Values 2 help articles for more information about these Nodes.
Save and run your Workflow, then click Submit Exercise to grade it.
Reading Key/Values
In the example above, we recorded success and error information; now we'll look at how to query that information.
Exercise 07: Reporting with Key-Values
Follow these steps in a new Workflow.
- Add Get Key-Values 2, Excel and Variable Bar.
- Set Get Key-Values 2.MatchGroup to contacts.
- Set Get Key-Values 2.Emit to Xml. We're going to convert the Key/Values data to an Excel sheet and the Excel Node requires XML rather than JSON.
- Connect Get Key-Values 2.Result → Excel.TableXml.
- Set Excel.Action to Create.
- Add Variable Bar.Report. Change the Property type from Text to File, then set the File Extension to xlsx.
- Connect Excel.ExcelDocument → Variable Bar.Report.
- Connect Start.RunNow → Get Key-Values 2 and Get Key-Values 2 → Excel.
- Run the Workflow and click the Download Report.xlsx Property in the Workflow Log entry for the Start Node to see the Excel presentation of the Key/Value data.
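Outside of Flowgear, the equivalent of this reporting step is querying the stored key/values and writing them out as rows in a tabular file. The hedged Python sketch below uses the standard csv module, with a CSV file standing in for the Excel sheet; the sample rows are illustrative only.

```python
import csv

# Sketch: turn stored key/values into a simple tabular report.
key_values = [
    {"Key": "0", "Status": "Error", "Value": "Employee not found"},
    {"Key": "1", "Status": "Success", "Value": "Jane Doe"},
    {"Key": "2", "Status": "Success", "Value": "John Smith"},
]

with open("report.csv", "w", newline="") as report_file:
    writer = csv.DictWriter(report_file, fieldnames=["Key", "Status", "Value"])
    writer.writeheader()
    writer.writerows(key_values)
```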
Review the Get Key-Value 2 and Get Key-Values 2 help articles for more information about these Nodes.
Save and run your Workflow, then click Submit Exercise to grade it.
Communication
There are often cases where you want a lightweight way to send a notification to yourself or team without having to do any special configuration.
The Email Alert Node allows you to set a recipient, subject and email body for this purpose. Emails sent from this Node will always use the sender alert@flowgear.net.
Review the Email Alert help article for more information about this Node.
While this Node is intended for lightweight internal notifications, production workloads would normally use Single Email, direct support ticket creation or push notifications.