OpenSplice DDS Forum

Vivek Pandey

Company: ADLINK Technology
  1. Dear Luca,

     The problem is caused by a network disconnection and reconnection (for a moment) between the two nodes. As a result of the disconnection between the data writer and the data reader, the instances go to the NOT_ALIVE_NO_WRITERS/NOT_ALIVE_DISPOSED state.

     I suppose you are using a take call. take may remove an instance from the reader administration when the instance becomes empty. When the network connection is restored, either the durability service will realign the data, or the writer (in the transient_local case) may resend its data. Because the instance was removed as a result of the take, all knowledge of that instance is gone, and the realigned data may then be read again (that is expected behavior).

     Note that in the commercial release the instance is not removed immediately after a take when there are no alive writers for that instance; instead, the instance is maintained for some time before being removed.

     With best regards, Vivek Pandey
  2. DurablePolicies is not required, because you are using TRANSIENT_LOCAL_DURABILITY_QOS and the ddsi service. If ddsi is used, the durability service has nothing to do with transient-local data delivery, because ddsi is responsible for the alignment of builtin topics. In fact, you don't need a durability service at all to get transient-local behavior when ddsi is used. DurablePolicies is only required when you don't run a durability service locally but instead request data from a durability service on a remote federation using the client-durability feature.

     I am not sure about the cause of your segmentation fault. You could try this scenario with our commercial OpenSplice DDS, in which all features and services are enabled; for evaluation it is free of charge.

     With best regards, Vivek Pandey
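For the remote-federation case mentioned in the second answer, client durability is enabled through the federation's deployment configuration. The fragment below is a hedged sketch from memory of the OpenSplice deployment guide, not a verified configuration; the element names, the domain name, and the `obtain` pattern should all be checked against the deployment guide for your OpenSplice version.

```xml
<!-- Sketch only: a DurablePolicies section under Domain, used when no local
     durability service runs and data must be obtained from a durability
     service on a remote federation (client-durability feature). -->
<OpenSplice>
  <Domain>
    <Name>ospl_shmem_ddsi</Name>  <!-- hypothetical domain name -->
    <DurablePolicies>
      <!-- Pattern of partition.topic to obtain from a remote server. -->
      <Policy obtain="*.*"/>
    </DurablePolicies>
  </Domain>
</OpenSplice>
```

With TRANSIENT_LOCAL_DURABILITY_QOS and ddsi, as the answer notes, no such section is needed at all.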
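The mechanism described in the first answer can be illustrated with a toy model. This is NOT the DDS API and `ToyReader`, `deliver`, `disconnect`, and the `"temp_sensor"` key are all invented for illustration; it only sketches why a take on an instance with no alive writers can make later realigned data look new: once the emptied instance is forgotten, the reader has no record that its samples were ever consumed.

```python
class ToyReader:
    """Toy sketch (NOT the DDS API) of a reader's instance administration."""

    def __init__(self):
        self.instances = {}         # instance key -> pending samples
        self.alive_writers = set()  # keys that still have an alive writer

    def deliver(self, key, sample):
        # Data arriving from the writer, or realigned by a durability service.
        self.instances.setdefault(key, []).append(sample)
        self.alive_writers.add(key)

    def disconnect(self, key):
        # Network drop: the instance becomes NOT_ALIVE_NO_WRITERS.
        self.alive_writers.discard(key)

    def take(self, key):
        # take consumes the pending samples for the instance.
        samples = self.instances.get(key, [])
        if key in self.instances:
            self.instances[key] = []
            if key not in self.alive_writers:
                # Empty instance with no alive writers: all knowledge of the
                # instance is removed, including that its samples were taken.
                del self.instances[key]
        return samples


reader = ToyReader()
reader.deliver("temp_sensor", 21.5)
assert reader.take("temp_sensor") == [21.5]  # normal consumption

reader.disconnect("temp_sensor")             # momentary network loss
reader.take("temp_sensor")                   # empties and forgets the instance

reader.deliver("temp_sensor", 21.5)          # durability realigns the old data
assert reader.take("temp_sensor") == [21.5]  # the old sample is read "again"
```

In the commercial behavior mentioned above, the deletion step would be deferred for some time, so a prompt realignment would not look like new data.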
  • Create New...