OpenSplice DDS Forum

luca.gherardi

Members
  • Content Count
    32
  • Joined
  • Last visited

About luca.gherardi

  • Rank
    Advanced Member

Profile Information

  • Company
    Verity Studios

Recent Profile Visitors

585 profile views
  1. Hi Hans, that makes sense. So if all the data is delivered, order is guaranteed in the single-topic, single-publisher case. Thanks a lot! Luca
  2. Hi Hans, Thanks for your answer, as usual very detailed and very clear! This means that in a system with one publisher and multiple subscribers (assuming that the readers can keep the pace of the writer) there should be no manual configuration required to order the incoming messages on a single topic (case 1). Is that correct? Thanks, Luca
  3. Hi everyone, I would like to understand the best way of guaranteeing that messages are received in the same order in which they are published (see the first sketch after this list). In our system we have publishers and subscribers on different nodes, communicating wirelessly. Nodes communicate over different topics. All the messages sent on the same topic have the same value for the key (i.e. #pragma keylist). What are the right configurations for these two problems: Making sure messages on the same topic are received in the order they are published (i.e. publisher sends messages 1, 2, and 3 on topic A →
  4. Thanks a lot for the explanation Vivek, that makes sense to me. Best, Luca
  5. Dear Vivek, Thanks for your answer, we will try to disable the durability service and policies and let you know how it goes. Unfortunately, the problem is happening only on a deployed system and it's not easy for us to use the commercial version there. If the changes proposed do not help we will try to get it deployed. One more question. Do you know what could be causing the reader to receive the same message twice after being created (see point below)? Thanks a lot, Luca
  6. Dear Vivek, Thanks a lot for your answer. We will remove the durability service from the ospl.xml configuration. Should we keep the DurablePolicies? Do you have any idea what could cause the segmentation fault? Could that be the network congestion effect mentioned in your answer? Thanks in advance, Luca
  7. Hi Hans, I can add one more thing. The ospl.xml configurations of the Wi-Fi nodes and the Ethernet node are different. This was not intentional. Could this be a problem? I report the differences below. If you could let us know which one of the two should be used, that would be helpful. On the nodes connected via Wi-Fi we have the following entry in ospl.xml (while we do not have it on the node connected via Ethernet): <DurabilityService name="durability"> <Network> <Alignment> <TimeAlignment>false</TimeAlignme
  8. Maybe one more detail worth adding: as suggested, we changed the AllowMulticast option to "spdp".
  9. Thanks Hans, We are using the community edition. We have just one topic that does not use volatile durability. These are its settings (see the second sketch after this list): topicQoS.reliability.kind = RELIABLE_RELIABILITY_QOS; topicQoS.history.kind = DDS::KEEP_LAST_HISTORY_QOS; topicQoS.history.depth = 5; topicQoS.durability.kind = TRANSIENT_LOCAL_DURABILITY_QOS; topicQoS.durability_service.history_kind = KEEP_LAST_HISTORY_QOS; topicQoS.durability_service.history_depth = 5; The data writer for this topic has the following setting enabled (all the other settings for the write
  10. Hi Hans, We've deployed the solution you proposed and we are experiencing a couple of problems: Our data writer is always alive, while the reader is created when needed. Therefore, when we create the reader we receive the last N messages sent by the writer (where N is the length of the queue). I expect this to be normal. However, in a few circumstances I've seen the messages being received twice. Is that due to some misconfiguration? On some of the nodes connected via Wi-Fi we had a segmentation fault of the application. Unfortunately we couldn't look into the core dumps, but lo
  11. Thanks Hans, There was actually an error on my side. The data writer object was destroyed, but I did not delete the writer on the domain participant side, so I assume it was still alive. Thanks also for the clarification on durability and history.
  12. Thanks a lot Hans, From an initial test on a sample application I noticed that if I use those settings and destroy the data writer before creating the data reader, the message is still received by the data reader. Is that due to the fact that I'm creating the writer and the reader in the same process? I don't see the same behavior when running the reader and the writer in different processes. Regarding history, I guess the topic history settings should be consistent with the durability history settings? I'll set the limits as suggested. If set to all it seems to stop receiving data pretty soon
  13. Hi Hans, Do I understand correctly that for TRANSIENT_LOCAL I have to apply the following settings (see the third sketch after this list)? Topic: topicQoS.durability.kind = DDS::TRANSIENT_LOCAL_DURABILITY_QOS topicQoS.durability_service.service_cleanup_delay = 0 Data reader: inherit from topic Data writer: inherit from topic writerQoS.writer_data_lifecycle.autodispose_unregistered_instances = true My understanding is that with the default values, how many samples are stored depends on the Topic history QoS, which in my case is inherit
  14. Hi Hans, Thanks a lot for your feedback! I'll test the suggestions you proposed and get back in case they don't help (it might take a bit). Out of curiosity, why could disabling multicast help? Thanks again, Luca
  15. Thanks a lot Hans, I will look into the durability settings. I've a couple of follow-up questions: Is there a limit on how many messages a late-joining node will receive? Let's say I've a topic with RELIABLE reliability and KEEP_ALL history. Would a late-joining node receive all the messages published before? Those could be a lot. The reference manual says that for TRANSIENT durability messages are stored in the data distribution service and not in the writer. What does this mean when using the single process (or standalone) configuration? In that case the data is stored in the
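
A minimal sketch of the ordering configuration asked about in post 3 above, assuming the classic OpenSplice DCPS C++ API; the include name and helper function are illustrative and not taken from the thread:

    // Hedged sketch: QoS so that samples of one topic, from one writer,
    // are seen in publication order. Helper and include names are illustrative.
    #include "ccpp_dds_dcps.h"

    void configureOrderedDelivery(DDS::TopicQos& topicQoS,
                                  DDS::DataReaderQos& readerQoS)
    {
        // Reliable delivery, so samples are not silently dropped on the Wi-Fi link.
        topicQoS.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;
        // Keep every sample until the application takes it, so a briefly slow
        // reader does not lose intermediate samples.
        topicQoS.history.kind = DDS::KEEP_ALL_HISTORY_QOS;
        // Order samples by the writer's timestamp rather than arrival time.
        topicQoS.destination_order.kind = DDS::BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS;

        // The reader uses the same policies (normally obtained via copy_from_topic_qos).
        readerQoS.reliability.kind       = topicQoS.reliability.kind;
        readerQoS.history.kind           = topicQoS.history.kind;
        readerQoS.destination_order.kind = topicQoS.destination_order.kind;
    }

As posts 1 and 2 conclude, with a single writer per topic reliable delivery already preserves write order; the source-timestamp destination order mainly matters if a second writer is ever added.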
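
A sketch that writes out in full the transient-local topic QoS quoted in post 9, under the same assumptions (classic OpenSplice DCPS C++ API, illustrative helper name):

    // Hedged sketch: the topic QoS quoted in post 9, written out in full.
    #include "ccpp_dds_dcps.h"

    void configureTransientLocalTopic(DDS::TopicQos& topicQoS)
    {
        // Reliable delivery with the last 5 samples kept for live readers.
        topicQoS.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;
        topicQoS.history.kind     = DDS::KEEP_LAST_HISTORY_QOS;
        topicQoS.history.depth    = 5;
        // TRANSIENT_LOCAL: a late-joining reader receives up to the last 5
        // samples still held by the (still alive) writer.
        topicQoS.durability.kind                  = DDS::TRANSIENT_LOCAL_DURABILITY_QOS;
        topicQoS.durability_service.history_kind  = DDS::KEEP_LAST_HISTORY_QOS;
        topicQoS.durability_service.history_depth = 5;
    }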
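
A sketch of how a writer and reader might inherit that topic QoS, with the autodispose setting mentioned in post 13; the publisher and subscriber handles are assumed to already exist, and the function name is illustrative:

    // Hedged sketch: writer and reader inheriting the topic QoS, as discussed
    // in post 13 (classic OpenSplice DCPS C++ API).
    #include "ccpp_dds_dcps.h"

    void configureEndpointsFromTopic(DDS::Publisher_ptr  publisher,
                                     DDS::Subscriber_ptr subscriber,
                                     const DDS::TopicQos& topicQoS,
                                     DDS::DataWriterQos& writerQoS,
                                     DDS::DataReaderQos& readerQoS)
    {
        // Start from the defaults, then copy the topic-level policies in.
        publisher->get_default_datawriter_qos(writerQoS);
        publisher->copy_from_topic_qos(writerQoS, topicQoS);
        // Dispose instances automatically when the writer unregisters them.
        writerQoS.writer_data_lifecycle.autodispose_unregistered_instances = true;

        subscriber->get_default_datareader_qos(readerQoS);
        subscriber->copy_from_topic_qos(readerQoS, topicQoS);
    }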