# general
  • Brian Pribe

    07/16/2025, 11:46 AM
    Portainer or ArgoCD. I've been doing gitops in the lab with both setups and UMH. This was one of my biggest gripes with UMH Classic: it didn't play well with version control for anyone who wanted to manage it themselves. UMH-Core is much better at this because of the single config file and container.
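    A minimal sketch of what that gitops setup can look like: a compose file kept in git that Portainer (or ArgoCD, via a manifest) deploys from. The image reference and mount path below are placeholders rather than official values; check the UMH-Core docs.

    ```yaml
    # Hypothetical compose file, version-controlled next to the UMH-Core config.
    services:
      umh-core:
        image: management.umh.app/oci/united-manufacturing-hub/umh-core:latest  # placeholder
        restart: unless-stopped
        volumes:
          - ./config.yaml:/data/config.yaml  # the single version-controlled config file
    ```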
  • DanielH

    07/16/2025, 2:42 PM
    Does Portainer handle Kubernetes? Do any of them handle automatic upgrades? I have been looking at Argo, but I'm not fully convinced it handles all my requirements.
  • Brian Pribe

    07/16/2025, 2:42 PM
    Yup and yup.
  • DanielH

    07/17/2025, 9:50 AM
    Which one do you use, and why? Does Argo handle local git? If using it in production, I do not want to use an external service to store everything.
  • Brian Pribe

    07/17/2025, 12:51 PM
    Depends on how big your team is and how much control you want. Big team, complicated CI/CD? ArgoCD. Small team? Portainer.
  • lucianofr

    07/17/2025, 9:14 PM
    Are Highbyte and UMH equivalent products/solutions? Competitors?
  • Diederik

    07/17/2025, 9:29 PM
    Comparable in some ways, but quite a different approach in my opinion.
  • trentc

    07/17/2025, 10:12 PM
    How would UMH help solve this? This is an actual application I'm working on, and I'm curious what approaches people in the community would take.

    Omron CJ2 PLCs; FINS or EtherNet/IP would be acceptable. Cyclic reading of a trigger: when it changes, capture a data block. The data block is variable length, read from the PLC, with a max of 14,700 words of data (16-bit integers representing 0.01 resolution). It needs to be collected, transformed, and scaled, along with the part serial number (7 WORDs of ASCII) coming from a different PLC, then stored to a database. We have to do this before the part leaves: typically 18 s, but as short as 10 s.

    https://cdn.discordapp.com/attachments/984082664678125578/1395528674622836806/image.png?ex=687ac6db&is=6879755b&hm=8313e08217a9ebb31ef4565e3e7bac92aa5d95cac3b89e46db90872702472d91&
  • Diederik

    07/18/2025, 8:48 AM
    To continue on this: how do we best handle transactional data with UMH (typically interfacing/exchanging order data)?
  • Jermuk

    07/18/2025, 9:25 AM
    Would it be possible to always fetch the entire data block and then do that merging in Benthos?
  • Jermuk

    07/18/2025, 9:25 AM
    So fetch all of that data every second or so, or whenever it changes, and then do the merging in UMH.
  • Jermuk

    07/18/2025, 9:25 AM
    Of course, only if the PLC supports it. But that would be the easiest method.
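    A minimal sketch of that poll-everything-and-filter idea on the Benthos side, assuming each poll arrives as JSON containing the full block plus a trigger word that increments per part; the PLC input itself is left out, since the FINS/EtherNet/IP side is exactly what's in question.

    ```yaml
    pipeline:
      processors:
        # Drop every poll whose trigger word has been seen before, so only the
        # roughly once-per-10-18 s capture flows downstream.
        - dedupe:
            cache: seen_triggers
            key: '${! json("trigger") }'

    cache_resources:
      - label: seen_triggers
        memory:
          default_ttl: 10m  # comfortably longer than the part cycle
    ```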
  • trentc

    07/18/2025, 10:28 AM
    Possible, yes. Ideal, no. This is a lot of data to read cyclically when it only changes every 10-18 seconds. And if this is where we start… it doesn’t give much headroom for reading more data for other purposes…
  • trentc

    07/18/2025, 10:29 AM
    I second this question. With UMH I would use Node-RED… but then you aren't leveraging all of the great data infrastructure UMH provides.
  • Jermuk

    07/18/2025, 11:00 AM
    The difficulty here is providing a good user journey, and it feels like this then requires writing custom microservices… this is why our idea is to bring the data out of the PLCs quickly, since it is easier to work with outside. Do you know of similar solutions that have already solved this?
  • trentc

    07/18/2025, 11:02 AM
    I already solved it with Node-RED....
  • trentc

    07/18/2025, 11:03 AM
    I agree that the data is easier to work with outside the PLC, but the network load is finite. I have lots of other ideas I want to apply, and if I have to read everything cyclically, I'm going to hit a ceiling quickly.

    I have a real-world scenario where a customer was using Ignition. They wanted "all of the data", and I gave them the tag list; it was very, very long. It worked "ok" for asset 1: four $15k USD PLCs with gigabit ports and Cisco switches on new network infrastructure. However, when they added asset 2 (four more top-of-the-line PLCs), intermittent issues started happening: network ports and communication dropouts, PLC ports going dark or choking, even problems in another area where programmers were having trouble with unrelated assets.

    The solution? Edge-driven technology. They moved to having the PLC push data to a database on an event instead of cyclic reading. This completely solved the issue, and they ended up running four assets with no more trouble. This obviously only works with modern controllers, though. We can "emulate" this a bit by using a "flag" in the PLC that triggers the data transaction.
  • trentc

    07/18/2025, 11:14 AM
    https://cdn.discordapp.com/attachments/984082664678125578/1395725309130575892/image.png?ex=687b7dfd&is=687a2c7d&hm=3454698009b58f8001e1f4864cd8936d9a78d516a250574836a01907d5c0d130&
  • Jermuk

    07/22/2025, 8:26 AM
    I think for something like this you need to do custom programming. The advantage of custom programming: it's smart, saves bandwidth, etc. The disadvantage: maintainability. Usually in factories this is done by one person, and the others never touch it (for fear of breaking something). Also, with something like Node-RED you need to consider that it has to be kept up to date, and with Node-RED being based on Node.js, which releases two major versions per year, keeping it updated is a significant effort.

    Our approach here is to always prefer maintainability over bandwidth: better to spend some money on better infrastructure than to pay the cost of never being able to update your systems (e.g., for security), or having to hire an SI every time you want to change something. There are a lot of cases where PLCs cannot handle that load; in those cases a compromise needs to be made.

    The case you are talking about sounds interesting. Gigabit can carry over 100 MB/s, so something must have been really unoptimized to hit that ceiling. I assume the PLCs simply cannot handle that many data points per second.
  • Jermuk

    07/22/2025, 8:29 AM
    If anyone else has suggestions on the best way to find a good middle ground here, feel free to share.
  • DanielH

    07/22/2025, 8:52 AM
    Oh, this is exciting. So you basically want a Benthos stream with multiple inputs: one input that reads cyclically for a trigger and another input that reads the data block. Whenever the trigger is true it should read input 2 and send to the DB; otherwise it should drop the message? This should be doable in Benthos.

    It is also similar to the alarm plugin I did for Benthos, which reads an input and listens to a specific part of the message as a trigger. Input 2 is an interval trigger that sends a message every second; when the alarm conditions are met it sends an alarm, and the interval trigger acts as a timer that resends the message after a specific time. It should not be too complicated to rewrite the plugin to suit your needs.

    For the choking of the network, it feels like there is some bad configuration. Are you trying to push realtime data and not just data?
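    A rough but runnable skeleton of that two-input gating idea. The `generate` inputs are stand-ins for the real cyclic PLC reads, and the payload shapes are invented for illustration; swap them for the actual FINS/EtherNet/IP inputs.

    ```yaml
    input:
      broker:
        inputs:
          - generate:        # stand-in for the cyclic trigger poll
              interval: 1s
              mapping: 'root = {"source": "trigger", "value": timestamp_unix() % 18 == 0}'
          - generate:        # stand-in for the data-block read
              interval: 1s
              mapping: 'root = {"source": "block", "words": [123, 456]}'

    pipeline:
      processors:
        - switch:
            - check: 'this.source == "trigger"'
              processors:
                # Remember the latest trigger state, then drop the trigger message.
                - cache:
                    resource: trigger_state
                    operator: set
                    key: trigger
                    value: '${! json("value").string() }'
                - mapping: 'root = deleted()'
            - check: 'this.source == "block"'
              processors:
                # Look up the last trigger value; pass the block only if it was true.
                - branch:
                    processors:
                      - cache:
                          resource: trigger_state
                          operator: get
                          key: trigger
                      - catch: []  # no trigger seen yet: clear the error, treat as false
                    result_map: 'root.trigger_was = content().string()'
                - mapping: |
                    root = if this.trigger_was.or("false") != "true" {
                      deleted()
                    } else {
                      this.without("trigger_was")
                    }

    output:
      stdout: {}  # stand-in for the database output

    cache_resources:
      - label: trigger_state
        memory: {}
    ```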
  • trentc

    07/22/2025, 10:50 AM
    I was able to solve it with Node-RED. I got a response from a guy at Kepware who says he does this type of triggered read-and-store-to-DB all the time; it's a feature the product offers, I guess. Litmus told me they can solve it with a flow triggering a one-time read of a tag that defines a block read. Tatsoft said they can do it, but with 14,700 tags… each triggered individually. Everything above required some level of customization to achieve, but appears manageable.
  • trentc

    07/22/2025, 10:55 AM
    The choking of the network was an extreme example from my past, not this application. Yes, it's what you mentioned: using two Benthos inputs, then a value triggers a data store. However, I was trying to avoid reading the 14,700-word block cyclically. The data block is populated over 10 s at the resolution of the PLC task time, 0.9 ms. It's not realtime; it populates, then is at rest until the next test.
  • DanielH

    07/22/2025, 12:48 PM
    Would using dual Benthos streams and a complex Bloblang processor work, or would it be better to have a custom processor that handles this? Can you program a processor yourself? My alarm processor can be found on my GitHub to use as a base.
  • trentc

    07/22/2025, 1:20 PM
    This will not be the last time this comes up. If you want to wait for more requests before you develop it, then fine, but I suggest you consider creating a processor that can help make this easier, considering that configuring the processor isn't always the easiest thing for the user. I won't be trying to build my own complex setup or write my own processor; that's just not where my focus lies.
  • DanielH

    07/22/2025, 1:24 PM
    I personally don't have any use for this myself at the moment, and I don't have time to develop processors for others right now; I first need to finish some other projects. I will keep it in mind if I get some spare time. I am not a developer at UMH, just a dedicated user.
  • Jermuk

    07/22/2025, 2:14 PM
    I think the implementation is not that straightforward, because it is quite dependent on the protocol itself (polling vs. subscribe, stateless vs. stateful protocols, etc.). What we at UMH want to do here is not do it like everyone else, but instead figure out a way to do it while still keeping an overview of it all. This means it must be something additional to the bridges, namely the plugins: probably a similar syntax across all plugins, or maybe a processor plugin that triggers the existing input plugins once.
  • DanielH

    07/22/2025, 2:48 PM
    I was looking at dynamic inputs, if that could be an option: have a trigger that enables the input to pull data once and then disables it. https://docs.redpanda.com/redpanda-connect/configuration/dynamic_inputs_and_outputs/
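    The skeleton from those docs would look roughly like this: the stream starts with no inputs, and whatever watches the trigger enables a one-shot block read through the REST API, then disables it again.

    ```yaml
    http:
      address: 0.0.0.0:4195  # the dynamic /inputs endpoints are served here

    input:
      dynamic:
        inputs: {}           # empty until the trigger logic adds the block-read input
        prefix: ""

    output:
      stdout: {}             # stand-in for the database output
    ```

    Creating the input is then a POST of a normal input config to /inputs/block_read (a hypothetical id), and a DELETE on the same path tears it down once the block is captured.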
  • Brian Pribe

    07/22/2025, 3:17 PM
    Dynamic inputs would be my guess, but what you don't want Benthos to do is maintain two connections to the same client. Not sure how the underlying tech handles this, but for benthos-umh plugins, maybe a batch tag-group feature would be appropriate, or the ability to define a group of tags in inputs for, say, a specific processor to use.
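    Purely illustrative, since none of these fields exist in benthos-umh today, the tag-group idea could look something like this, with one client connection serving both the cyclic trigger and the on-demand block:

    ```yaml
    # Hypothetical syntax, not a shipped feature.
    input:
      opcua:                 # or a future FINS/EtherNet/IP plugin
        endpoint: "opc.tcp://plc:4840"
        tag_groups:
          trigger:           # polled cyclically
            interval: 1s
            nodeIDs: ["ns=2;s=CaptureReady"]
          block:             # read once, only when a downstream processor asks
            on_demand: true
            nodeIDs: ["ns=2;s=DataBlock"]
    ```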
  • DanielH

    07/22/2025, 6:40 PM
    I was looking at something similar with batching before. On some machines I poll data at a constant rate using OPC UA, and when a specific value is true, I save all tags to the DB. I solve this today by grouping the data into one JSON output and using Node-RED to handle the logic. I was looking into using subscriptions for the tags and a buffer in Benthos to hold the latest values until the correct tag triggers the write to the DB, which then writes all the tags with the same timestamp. This would basically work the same way as what @trentc is requesting. Hmm, maybe I need to write a custom processor for this, since the caching in Benthos can be quite complex to set up.
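    A sketch of that subscribe-and-buffer pattern with the benthos-umh OPC UA input in subscribe mode. Node IDs, tag names, and the exact payload/metadata shape (e.g. the `opcua_tag_name` metadata key) are assumptions to check against the plugin docs.

    ```yaml
    input:
      opcua:
        endpoint: "opc.tcp://machine:4840"  # placeholder address
        nodeIDs: ["ns=2;s=Trigger", "ns=2;s=Temp", "ns=2;s=Pressure"]
        subscribeEnabled: true              # push on change instead of polling

    pipeline:
      processors:
        # 1. Buffer the latest value of every tag, keyed by its tag name.
        - cache:
            resource: latest_values
            operator: set
            key: '${! meta("opcua_tag_name") }'
            value: '${! content() }'
        # 2. Only a true trigger value continues past this gate.
        - mapping: |
            root = if meta("opcua_tag_name") != "Trigger" || content().string() != "true" {
              deleted()
            }
        # 3. Start a fresh snapshot document with one shared timestamp...
        - mapping: 'root = {"timestamp_ms": (timestamp_unix_nano() / 1000000).floor()}'
        # 4. ...and pull the buffered values into it.
        - branch:
            processors:
              - cache: { resource: latest_values, operator: get, key: "Temp" }
            result_map: 'root.temp = content().string()'
        - branch:
            processors:
              - cache: { resource: latest_values, operator: get, key: "Pressure" }
            result_map: 'root.pressure = content().string()'

    output:
      stdout: {}  # stand-in for the real database output

    cache_resources:
      - label: latest_values
        memory: {}
    ```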