- Idea: Push as much data cleaning as possible into the protocol converters, and make it easier to do so.
- Default Data Contract (_historian): All raw data lands in the default data contract, _historian.
- Stream Processor Usage:
  - From the Tag Browser or the Data Flows interface, use the Stream Processor to order and process messages.
  - Process messages into your own data contracts, such as _erp.
- Output Options:
  - Write the processed data into PostgreSQL using a bridge.
  - Send the data to a protocol converter (output).
- Goal: Provide a structured and efficient way to clean and process data, enabling users to integrate data into various systems seamlessly.
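To make the stream-processor step concrete, here is a minimal sketch of the kind of mapping it would perform: take a raw _historian-style message and reshape it into a custom _erp-style payload. The field names, the payload shapes, and the `historian_to_erp` function are all assumptions for illustration, not the actual product API.

```python
import json

def historian_to_erp(raw: str) -> str:
    """Reshape a raw _historian-style payload into a hypothetical
    _erp-style payload. Field names here are illustrative assumptions."""
    msg = json.loads(raw)
    erp = {
        # Assumed mapping: the historian tag name becomes an ERP identifier.
        "order_id": msg.get("tag", "unknown"),
        "value": msg["value"],
        "timestamp_ms": msg["timestamp_ms"],
    }
    return json.dumps(erp)

# Example raw message as it might arrive from a protocol converter.
raw = json.dumps({"tag": "count", "value": 42, "timestamp_ms": 1717000000000})
print(historian_to_erp(raw))
```

The resulting JSON could then be handed to a bridge (e.g. into PostgreSQL) or to an output protocol converter, matching the output options above.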
Does this idea make sense?