# umh-support
t
Issue (closed): My Dataflows and Connections are gone....
I am not sure when or what event caused it...but my connections and dataflows that were there previously are gone.... I still have my instances....
f
But the rest of your instance is reachable? Can you check on the Instance -> Infrastructure page what kind of k3s events you get? The module page should also show, under data flows, if there were errors with the DFCs
t
Ok, false alarm....sorry....I had a couple of instances that were offline and they contained almost all of my connections and DFCs... They also needed software updates and companion updates
I tried signing out, clearing site data and cookies....then sign in again...still nothing
f
We currently have an issue where data takes 25 seconds to first show up; the fix for that is already in the pipeline
t
My instance has been running for 41 minutes.....It says it's online...but still no data in the configuration/setup screen.
I take that back....it has some General info, name, connections and # of tags.
f
Can you open `chrome://inspect/#workers` in Chrome? It will give you a list of Shared Workers. Can you: a) confirm that there is a worker called "poller" from "https://management.umh.app/...." b) click Inspect and check if in the Network tab it produces pull, getAllUMHInstances and push messages?
t
that's true of both of the workers.
f
Ok, it looks like you have an old version of our background worker running alongside a new version. This can happen if you opened the frontend in one tab and then, after we published a new version, opened another tab
Can you try either closing all UMH tabs and re-opening them, or terminating the workers and refreshing the pages?
t
I closed my browser, reopened it, re-logged in to UMH, and it only had 1 worker....but it still wasn't getting info from this instance. I restarted the instance, and information is still missing....however it has these errors...
f
That means that our DFC cannot reach your MQTT server. Is it reachable by you?
192.168.X.Y is something outside our cluster
t
nope, I cannot connect to it....
@Ferdinand I still can't get this instance to function properly...
the MQTT errors might be calling out 192.168.100.25....which is the local IP address of the instance.....but why would it be doing that, since it should just be pointing to itself using an internal host name?
f
Can you show me the config of that dfc ?
t
I can't connect to the instance and get the DFCs from this instance in the console....it says it's offline...but there is still partial status info....
so I'm going to have to dig into the file system to find the DFCs
@Ferdinand How do I get my DFCs? I can find the built-in ones using this....

```shell
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get configmaps -n united-manufacturing-hub
kubectl describe configmap <configmap-id> -n united-manufacturing-hub
```
f
```shell
kubectl get configmaps -n mgmtcompanion -l is-data-flow-component=true
```
t
Ok, that gives me the list....
f
```shell
kubectl get configmaps -n mgmtcompanion
```
will show the content
f
my fault
```shell
kubectl get configmaps -n mgmtcompanion -o yaml
```
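Putting the pieces above together, a single invocation that dumps only the dataflow-component configmaps might look like this (a sketch; the namespace and label selector come from the earlier messages in this thread):

```shell
# Dump only the dataflow-component configmaps: combine the
# is-data-flow-component label selector with yaml output.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get configmaps -n mgmtcompanion \
  -l is-data-flow-component=true \
  -o yaml
```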
f
Can you use the hostname here instead of `tcp://192.168.100.25:1883`, to use our internal MQTT broker? `united-manufacturing-hub-mqtt.united-manufacturing-hub.svc.cluster.local:1883`
But I guess we also need to fix your instance.
We could do a quick call to check what's going on with your instance, just let me know when you have time.
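For reference, the suggested change would look something like this inside the DFC's Benthos-style config. The field layout here is an illustrative sketch; only the two broker addresses come from this thread:

```yaml
input:
  mqtt:
    urls:
      # before: the instance's LAN IP, not reachable from inside the cluster
      # - tcp://192.168.100.25:1883
      # after: the in-cluster service name of the UMH MQTT broker
      - united-manufacturing-hub-mqtt.united-manufacturing-hub.svc.cluster.local:1883
```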
t
Yea, I can't very easily update the DFC at the moment....as the management console isn't working for this instance... So I have to figure out how to get it up and running again....
I have about 30-40 min now....
f
I've sent you an invite
Since yours isn't the first instance to go down due to disk pressure, we also decided to make the warning more visible 🙂
t
yea, wouldn't be a bad idea to figure out how to monitor it and send out notifications....because once it's up and running....I don't expect to look at the console frequently
Maybe it's just using Node-RED to get the disk size and then integrating with a notification channel of choice?
f
We could think about publishing the stats of the instance via Kafka (which will also write them into TimescaleDB), so you could use Grafana alerting
(or of course subscribe using Node-RED)
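Until instance stats are published that way, the disk-pressure idea above can be approximated with a small script on the node itself. This is a minimal sketch; the 80% threshold and the root mount point are illustrative assumptions, not UMH defaults:

```shell
#!/bin/sh
# Read the root filesystem's usage percentage with POSIX df -P output
# and print a warning once it crosses a threshold. Hook the warning
# branch up to the notification channel of your choice.
THRESHOLD=80
USED=$(df -P / | awk 'NR==2 { gsub(/%/, "", $5); print $5 }')
if [ "$USED" -ge "$THRESHOLD" ]; then
  echo "WARNING: / is ${USED}% full (threshold ${THRESHOLD}%)"
else
  echo "OK: / is ${USED}% full"
fi
```

A cron entry running this every few minutes, with the warning branch publishing to MQTT or a webhook, would cover the "don't expect to look at the console" case.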
t
Some built-in ability to monitor the instance(s) using the infrastructure would certainly be valuable