Stream Processor
# general
b
I have the same thoughts on these topics. The ability to create data flows in the same place you would find those queries would be very powerful. Two types of flows would be great: domain dataflows and selection dataflows. At e/s/a/l/edge/_history/tagName.payload.value, do some scripting (blob/java) and put the result in e/s/a/l/_func/payload. Then at e/s/a/l/_func, do some scripting to plug it into <>.
j
I think what you describe is what we call a stream processor, right? Take data from the UNS, change the name and metadata, maybe apply a simple calculation, and then publish it again?
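A minimal sketch of the transform j describes, as a plain Python function rather than any actual UMH API: take a message from one UNS topic, rename the tag, attach metadata, apply a simple calculation, and emit it under a new topic. All topic and field names here are illustrative assumptions.

```python
# Hypothetical stream-processor step: rename a raw tag, convert its unit,
# keep provenance metadata, and republish under a standardized topic.
# Topic layout (enterprise/site/area/line/...) is assumed for illustration.

def process(topic: str, payload: dict) -> tuple[str, dict]:
    """Rename a raw temperature tag, convert the unit, and re-publish."""
    value_f = payload["value"]
    transformed = {
        "value": round((value_f - 32) * 5 / 9, 2),  # Fahrenheit -> Celsius
        "unit": "degC",
        "source_topic": topic,                      # provenance metadata
        "timestamp_ms": payload["timestamp_ms"],
    }
    # publish under a standardized tag name in the same ISA-95 path
    new_topic = topic.rsplit("/", 1)[0] + "/temperature"
    return new_topic, transformed

# Example: a raw Fahrenheit tag arriving on the edge sub-path
topic, msg = process(
    "enterprise/site/area/line/edge/tempF",
    {"value": 212.0, "timestamp_ms": 1700000000000},
)
```

In a real deployment this function body would live inside whatever scripting the stream processor offers; the point is only that the step is a pure message-in, message-out transform.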
b
Exactly. The idea I had in mind was using one processor to standardize the data models, and then another processor that looks for that model in the namespace and, based on some scripting, sends it out to the connector. That allows for scripted discovery and the ability to select which models to shape the data into.
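The two-stage pattern b describes could be sketched as follows, under assumptions: the "domain processor" normalizes raw payloads into a known model, and the "selection processor" forwards only model-conforming messages to a connector. The model shape, field names, and topic are made up for the sketch.

```python
from typing import Callable

# Assumed shape of a standardized model (not an actual UMH schema)
STANDARD_MODEL = {"value", "unit", "timestamp_ms"}

def domain_processor(raw: dict) -> dict:
    """Normalize a raw payload into the standard model."""
    return {
        "value": float(raw["val"]),
        "unit": raw.get("unit", "unknown"),
        "timestamp_ms": raw["ts"],
    }

def selection_processor(topic: str, payload: dict,
                        connector: Callable[[str, dict], None]) -> bool:
    """Forward only payloads that conform to the standard model."""
    if set(payload) == STANDARD_MODEL:
        connector(topic, payload)
        return True
    return False

# Discovery-style usage: only standardized messages reach the connector.
sent = []
standardized = domain_processor({"val": "42", "ts": 1700000000000})
selection_processor("enterprise/site/area/line/_func/flow", standardized,
                    lambda t, p: sent.append((t, p)))
```

Separating the two stages keeps the concerns b mentions apart: devices (via the domain processor) own the model, applications (via the selection processor) own the routing.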
d
You mean something like factory insight, right? If you publish your tags to specific topics from the start, like /a/s/a/l/edge/cycleCount and /a/s/a/l/edge/machineRunning, it will auto-generate values like OEE at /a/s/a/l/factoryinsight/OEE.
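For context on the kind of auto-generated value d mentions, here is a hedged sketch of a standard OEE computation (availability × performance × quality). The inputs and their units are assumptions for illustration, not the actual factoryinsight schema or topic layout.

```python
def oee(planned_min: float, runtime_min: float,
        total_count: int, good_count: int,
        ideal_cycle_s: float) -> float:
    """Classic OEE: availability * performance * quality.

    planned_min   -- planned production time in minutes
    runtime_min   -- actual runtime in minutes (e.g. from machineRunning)
    total_count   -- total parts produced (e.g. from cycleCount)
    good_count    -- parts without defects
    ideal_cycle_s -- ideal cycle time per part in seconds
    """
    availability = runtime_min / planned_min
    performance = (ideal_cycle_s * total_count) / (runtime_min * 60)
    quality = good_count / total_count
    return availability * performance * quality

# 480 planned minutes, 420 minutes running, 1200 parts (1140 good),
# ideal cycle of 18 seconds:
result = oee(480, 420, 1200, 1140, 18)
```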
b
Exactly. For the factory insight processor (a selection processor) to work, you need the data to be formatted correctly first (by a domain processor).
d
But you already have that functionality in factory insight. Are you proposing a kind of extension to this with custom schemas/calculations?
b
Absolutely. How am I going to integrate my ERP to look within the namespace and find topics it's interested in processing further? _erp-func/ needs a specific model, and the publisher needs to know that format. It's way quicker to model this data in UMH. Once it's formatted correctly, we can use another processor to decide how to process the payload in _erp-func/
d
I do not quite understand what you are looking for. When integrating the ERP, you can choose whatever structure you like. In my case I only use a small portion of the ERP on some dashboards to visualize which order has started. Our ERP is all in its own DB, so I need to query it to get the data.
b
You can have everything in one processor, but I like to separate concerns: devices need to format their data correctly, and applications need to decide how they want to process the available models.
j
- Idea: Put as much data cleaning as possible into the protocol converters and make it easier to do so.
- Default data contract (_historian): Store all the raw data in the default data contract, _historian.
- Stream processor usage:
  - From the Tag Browser or the Data Flows interface, use the stream processor to order and process messages.
  - Process messages into your own data contracts, such as _erp.
- Output options:
  - Write the processed data into PostgreSQL using a bridge.
  - Send the data to a protocol converter (output).
- Goal: Provide a structured and efficient way to clean and process data, enabling users to integrate data into various systems seamlessly.

Does this idea make sense?
b
I like this! Let's get the functionality and go from there. There's a lot we can build on top of this.