
51Degrees Pipeline Documentation 4.1

Flow Data

Introduction

Flow data is a container that encapsulates all the data related to a single Pipeline process request. This includes input and output data as well as metadata related to the processing such as the details of any errors that occurred.
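For orientation, the sketch below shows a single request passing through flow data. It is a minimal illustration in Java based on the Pipeline Java API; the package names, method names and evidence key shown are indicative and may differ slightly in the version you are using.

```java
import fiftyone.pipeline.core.data.FlowData;
import fiftyone.pipeline.core.flowelements.Pipeline;

public class FlowDataLifecycle {

    public static void processOneRequest(Pipeline pipeline) throws Exception {
        // Flow data is always created by the pipeline that will process it.
        FlowData flowData = pipeline.createFlowData();

        // Input: evidence supplied by the caller (or by a web integration).
        flowData.addEvidence("header.user-agent", "Mozilla/5.0 ...");

        // Processing: each flow element reads the evidence (and the output of
        // earlier elements) and stores its own element data in the flow data.
        flowData.process();

        // Output: element data and any errors recorded during processing are
        // now available on the same flow data instance (see the sections below).
    }
}
```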

Data Structure

Flow data has several sub-containers that are used to segment the data that it contains:

Evidence

Before the flow data is passed into the Pipeline, input data is supplied. We refer to this data as 'evidence'. The evidence can be set manually, or populated automatically by a web integration package (where available) for your web framework of choice.

Visit the evidence page for more details.
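As a sketch of setting evidence manually (Java; the evidence keys shown are typical examples rather than a definitive list, and the exact method signatures may vary by version):

```java
import java.util.HashMap;
import java.util.Map;

import fiftyone.pipeline.core.data.FlowData;
import fiftyone.pipeline.core.flowelements.Pipeline;

public class EvidenceExample {

    public static void supplyEvidence(Pipeline pipeline) throws Exception {
        FlowData flowData = pipeline.createFlowData();

        // Evidence keys are dotted strings that describe where each value came
        // from. Which keys are actually used depends on the flow elements in
        // the pipeline; these are illustrative examples only.
        flowData.addEvidence("header.user-agent", "Mozilla/5.0 ...");
        flowData.addEvidence("cookie.session-id", "abc123");

        // Several values can also be supplied in one call from a map.
        Map<String, Object> moreEvidence = new HashMap<>();
        moreEvidence.put("query.example-parameter", "value");
        flowData.addEvidence(moreEvidence);

        flowData.process();
    }
}
```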

Element Data

The responses from each flow element are stored in a key/value structure within flow data. In each case, the key is the flow element's string key and the value is an element data instance. The element data structure is visible to each flow element, so one element can use the result from another element in its processing.

The 51Degrees cloud engines are an example of where this is required. First, one element makes an HTTP request to the cloud service and stores the JSON response in the flow data. Later, another element parses that JSON response to populate a strongly typed object with values for the specific aspect it is concerned with.

Visit the element data page for more details.
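The sketch below shows the two ways a consumer (or another flow element) typically reads element data after processing. It assumes the Java Pipeline API; the element key, the placeholder data interface and the exact accessor signatures are illustrative.

```java
import fiftyone.pipeline.core.data.ElementData;
import fiftyone.pipeline.core.data.FlowData;
import fiftyone.pipeline.core.flowelements.Pipeline;

public class ElementDataExample {

    // Placeholder for the strongly typed data interface exposed by a real
    // flow element (for example, device data from a device detection engine).
    interface ExampleElementData extends ElementData { }

    public static void readResults(Pipeline pipeline) throws Exception {
        FlowData flowData = pipeline.createFlowData();
        flowData.addEvidence("header.user-agent", "Mozilla/5.0 ...");
        flowData.process();

        // Each element's output is stored against that element's string key.
        // 'example-element' stands in for the key of a real flow element.
        ElementData byKey = flowData.get("example-element");

        // Where a strongly typed data class is available, the same data can be
        // retrieved by type instead of by key.
        ExampleElementData typed = flowData.get(ExampleElementData.class);

        System.out.println("By key: " + byKey + ", typed: " + typed);

        // A flow element can make the same calls against the flow data passed
        // to it during processing; that is how one element consumes the output
        // of an earlier one, as in the cloud example above.
    }
}
```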

Errors

The errors collection stores the details of any errors that occur during processing. The language's default exception handling mechanism will be used to catch and record any exceptions that occur when a flow element is processing. However, processing of later flow elements will continue as normal.

By default, once all flow elements have been processed, an exception will be thrown with details of any errors that have occurred.

The pipeline builder has an option to modify this behavior so that exceptions are totally suppressed. In this situation, the caller is responsible for handling any exceptions by checking the errors collection after processing.
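As a sketch of the suppressed-exceptions pattern described above (Java; the builder option name and the FlowError accessors are assumptions based on the Pipeline API and may differ in your language and version):

```java
import fiftyone.pipeline.core.data.FlowData;
import fiftyone.pipeline.core.data.FlowError;
import fiftyone.pipeline.core.flowelements.Pipeline;
import fiftyone.pipeline.core.flowelements.PipelineBuilder;

public class ErrorHandlingExample {

    public static void processAndCheckErrors() throws Exception {
        // Build a pipeline that records errors rather than throwing once all
        // elements have finished. The option name used here is an assumption;
        // check the builder for your language and version.
        Pipeline pipeline = new PipelineBuilder()
            // .addFlowElement(...) // add flow elements here
            .setSuppressProcessExceptions(true)
            .build();

        FlowData flowData = pipeline.createFlowData();
        flowData.addEvidence("header.user-agent", "Mozilla/5.0 ...");
        flowData.process();

        // With exceptions suppressed, the caller is responsible for checking
        // the errors collection once processing has finished.
        if (flowData.getErrors() != null) {
            for (FlowError error : flowData.getErrors()) {
                System.err.println("Flow element failed: " + error.getThrowable());
            }
        }
    }
}
```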

Life Cycle

Creation

Flow data is only ever created by a Pipeline, when the CreateFlowData method is called. This allows the Pipeline to create the flow data's internal data structures using implementations that are most appropriate for the configuration of the flow elements in the Pipeline.

For example, thread-safe but slower data collections only need to be used if the Pipeline is configured to execute elements in parallel.
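A minimal sketch of the creation step (Java; 'addFlowElementsParallel' is an assumed builder method name, shown only to illustrate the parallel case):

```java
import fiftyone.pipeline.core.data.FlowData;
import fiftyone.pipeline.core.flowelements.Pipeline;
import fiftyone.pipeline.core.flowelements.PipelineBuilder;

public class CreationExample {

    public static void createFlowData() throws Exception {
        // The flow elements (and whether any of them run in parallel) are fixed
        // when the pipeline is built, so the pipeline can choose the cheapest
        // safe internal collections for every flow data it creates.
        Pipeline pipeline = new PipelineBuilder()
            // .addFlowElement(elementA)                    // sequential element
            // .addFlowElementsParallel(elementB, elementC) // parallel elements
            //   (method name assumed here for illustration)
            .build();

        // Flow data is never constructed directly; it always comes from the
        // pipeline that will process it.
        FlowData flowData = pipeline.createFlowData();
        flowData.process();
    }
}
```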

Disposal / Cleanup

Flow data disposal should be left to the garbage collector. This ensures that any resource which may still be needed (e.g. if it has been cached) is freed at the correct point.

Thread Safety

Thread safety guarantees are language-specific; refer to the reference documentation for your chosen language for details.