OpenSplice DDS Forum

Is it possible to change the GUID manually?


Dear all,

We are developing a solution with OpenSplice. To avoid repeated messages, we want to manually set the GUID for each subscriber. Is this possible?

We have looked for documentation but have not found anything.

Thank you!


GUIDs are automatically/internally generated, so they are not meant to be manually provided (it's certainly not part of the DDS API that you'd want to program against).

I'm curious, however, what problem you're facing (which apparently is related to 'repeated messages'). Could you elaborate on that a little?


Hi Hans, 

Thank you for your reply. We have a few vehicles connected to a middleware; both sides can publish on different topics.

We have set QoS parameters, but we don't have a solution yet. We need persistence for late-joiners, but when a vehicle or the middleware restarts, it receives all messages again. We assume this happens because they get different GUIDs.

These are the values that we have defined:

  • topicQos.value.reliability.kind = DDS.ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
  • topicQos.value.durability.kind = DDS.DurabilityQosPolicyKind.TRANSIENT_DURABILITY_QOS;
  • topicQos.value.history.kind = DDS.HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
  • topicQos.value.history.depth = 5;
  • topicQos.value.resource_limits.max_samples_per_instance = 5;
  • topicQos.value.destination_order.kind = DDS.DestinationOrderQosPolicyKind.BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS;

So, our idea is that if we can set the GUID ourselves, it will always be the same, and the middleware or vehicles will not receive messages again.
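For context, here is roughly how the settings above are applied when creating a topic in the OpenSplice "classic" Java API. This is only a sketch: `participant` is assumed to be an existing `DDS.DomainParticipant`, and the topic/type names are placeholders; exact signatures may differ per release.

```java
// Sketch: applying the QoS listed above when creating a topic.
// 'participant' is an existing DDS.DomainParticipant (not shown here);
// "VehicleState" / "VehicleStateType" are placeholder names.
DDS.TopicQosHolder topicQos = new DDS.TopicQosHolder();
participant.get_default_topic_qos(topicQos);

topicQos.value.reliability.kind = DDS.ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
topicQos.value.durability.kind = DDS.DurabilityQosPolicyKind.TRANSIENT_DURABILITY_QOS;
topicQos.value.history.kind = DDS.HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
topicQos.value.history.depth = 5;
topicQos.value.resource_limits.max_samples_per_instance = 5;
topicQos.value.destination_order.kind =
    DDS.DestinationOrderQosPolicyKind.BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS;

DDS.Topic topic = participant.create_topic(
    "VehicleState", "VehicleStateType", topicQos.value,
    null, DDS.STATUS_MASK_NONE.value);
```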

Thank you for your help.



Thanks for explaining. 

I have a few remarks/questions:

  • I see you're using TRANSIENT durability, which is typically exploited in a 'federated' deployment, where federations have a configured durability service that maintains non-volatile data for late-joiners. In a standalone deployment (aka 'single-process', which is the only option when using the community edition), you can still use TRANSIENT data, but that then relies on active applications that have a durability service configured (in their 'ospl' configuration), and the historical data retained by an application is tied to the lifecycle of that application (so when all apps are gone, there won't be any retained historical data). Another consequence is that each application potentially retains a copy of ALL historical data, whether it's interested in it or not. You might want to consider using TRANSIENT_LOCAL data, which is more suitable for standalone deployment, as that data is maintained solely by the producer (writer) of that TRANSIENT_LOCAL data. Note that the amount of TRANSIENT_LOCAL data retained by writers is (like for TRANSIENT data) driven by the topic-level durability_service settings.
  • I see that you don't configure the durability_service QoS settings on the topic, which means the defaults apply, i.e. a KEEP_LAST/1 policy (so 1 historical sample per instance will be retained, by durability services in the case of TRANSIENT data or by TRANSIENT_LOCAL writers in the case of TRANSIENT_LOCAL data). I agree this might not sound intuitive, but as non-volatile/durable data needs to be (resource-)controlled, the topic-level durability_service QoS policies (kind/depth and resource limits) are used to do that (for both TRANSIENT and TRANSIENT_LOCAL data behavior).
  • I see that you distinguish between 'late-joiners' and 'restarted apps', which is somewhat different from what is typically assumed, where a crashed/restarted app is (perhaps even especially) also considered a late-joiner (to regain its state from before the crash/restart).
  • If it were possible for these apps to detect whether they are restarted or first-started (what you call a late-joiner), you might consider a 'trick' where you create the reader as VOLATILE (so it won't receive any historical/durable data), but when necessary (i.e. if it is a 'true late-joiner' as you define it) explicitly call 'wait_for_historical_data()' to retrieve the historical data anyhow (there's a timeout you can exploit for how long you'd want to wait for that). We had some specific use-cases in the past where, for one topic, there were both periodic writers (for which no durability is required as updates come in regularly) and one-shot writers (whose one-shot update wasn't allowed to get lost), for which support for this pattern was introduced.
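The 'trick' in the last bullet could be sketched as follows in the classic Java API. This is an illustration only: `subscriber`, `topic`, and the `isFirstStart` restart-detection check are placeholders the application would have to provide, and the behavior of wait_for_historical_data() on a VOLATILE reader is an OpenSplice-specific extension.

```java
// Sketch: a VOLATILE reader, so no durable data is delivered automatically
// on (re)start; a true late-joiner pulls historical data explicitly.
DDS.DataReaderQosHolder readerQos = new DDS.DataReaderQosHolder();
subscriber.get_default_datareader_qos(readerQos);
readerQos.value.durability.kind = DDS.DurabilityQosPolicyKind.VOLATILE_DURABILITY_QOS;

DDS.DataReader reader = subscriber.create_datareader(
    topic, readerQos.value, null, DDS.STATUS_MASK_NONE.value);

// Placeholder: app-specific logic distinguishing first-start from restart.
if (isFirstStart) {
    // Wait at most 30 seconds for the historical data to arrive.
    DDS.Duration_t timeout = new DDS.Duration_t(30, 0);
    int result = reader.wait_for_historical_data(timeout);
    // A timeout return code means not all historical data arrived in time.
}
```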
