OpenSplice DDS Forum

All Activity

  1. Earlier
  2. You can go through the link below to understand how the firewall and the DDSI networking service ports affect DDS communication: https://istkb.adlinktech.com/article/ddsi-networking-service-ports/ Thanks
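     For reference, the default DDSI port numbers can be worked out from the standard DDSI/RTPS port-mapping parameters (PB=7400, DG=250, PG=2, d0=0, d1=10, d2=1, d3=11). The sketch below is only an illustration and assumes these defaults are not overridden in the ospl/ddsi configuration; the domain and participant ids are placeholders.

        public final class DdsiDefaultPorts {
            // Standard DDSI/RTPS port-mapping defaults (overridable in the ddsi configuration).
            static final int PB = 7400, DG = 250, PG = 2;
            static final int D0 = 0, D1 = 10, D2 = 1, D3 = 11;

            public static void main(String[] args) {
                int domainId = 0;       // adjust to the domain you deploy
                int participantId = 0;  // 0 for the first DDSI participant on a node

                System.out.println("discovery multicast: " + (PB + DG * domainId + D0));
                System.out.println("user data multicast: " + (PB + DG * domainId + D2));
                System.out.println("discovery unicast  : " + (PB + DG * domainId + D1 + PG * participantId));
                System.out.println("user data unicast  : " + (PB + DG * domainId + D3 + PG * participantId));
            }
        }

     For domain 0 and the first participant this gives UDP ports 7400, 7401, 7410 and 7411 - the kind of list an IT department needs when opening a firewall.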
  3. Hi Hans, that makes sense. So if all the data is delivered, order is guaranteed for the single-topic, single-publisher case. Thanks a lot! Luca
  4. Indeed, that's correct. Just don't confuse 'ordering' with 'delivery', as - as explained - not all (ordered) data may show up at the reader side, for the following reasons: data was downsampled at the (KEEP_LAST) writer .. for reliable data this is also often called 'last-value reliability', meaning that in this pattern the latest data for each instance will be reliably delivered (note 'for each instance', as the 'overwrite behavior', both at the sending and the receiving side, is 'per instance'); data was downsampled at the (KEEP_LAST) reader, which should not be a surprise and actually …
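     As an illustration of the writer-side part of this, a minimal sketch of 'last-value reliability' QoS in the classic OpenSplice Java API (the holder and enum names follow that API as used elsewhere in this thread; the publisher argument and class name are just placeholders for the sketch):

        final class WriterHistorySketch {
            // Sketch only: a DataWriterQos for 'last-value reliability'.
            static DDS.DataWriterQos lastValueReliableQos(DDS.Publisher publisher) {
                DDS.DataWriterQosHolder h = new DDS.DataWriterQosHolder();
                publisher.get_default_datawriter_qos(h);
                h.value.reliability.kind = DDS.ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
                // Only the latest sample per instance is retained at the writer,
                // so only that latest value is what gets reliably delivered.
                h.value.history.kind  = DDS.HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
                h.value.history.depth = 1;
                // To avoid any writer-side downsampling, use KEEP_ALL instead:
                // h.value.history.kind = DDS.HistoryQosPolicyKind.KEEP_ALL_HISTORY_QOS;
                return h.value;
            }
        }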
  5. Hi Hans, Thanks for your answer, as usual very detailed and very clear! This means that in a system with one publisher and multiple subscribers (assuming that the readers can keep pace with the writer) there should be no manual configuration required to order the incoming messages on a single topic (case 1). Is that correct? Thanks, Luca
  6. Thanks Vivek. It seems then that the firewall is the problem. Are you able to point me towards any info/documentation that describes how or why the firewall could be interfering with the DDS implementation? This would help me describe the problem to our IT department! Thanks
  7. In general, 'what' is being received (independent of ordering) is driven by multiple aspects such as reliability and history: 'history' settings at both the sender and the receiver might impact what is eventually delivered; when a writer exploits a KEEP_LAST history and is writing faster than the system/network can handle, data will be 'downsampled'; when a reader exploits a KEEP_LAST history and is reading slower than the data arrives, data will be 'downsampled'. Note that the above does not constitute 'message loss' .. it's just …
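     The reader-side counterpart, again only a sketch against the classic OpenSplice Java API (class and parameter names are placeholders): a KEEP_ALL reader queues everything until the application reads it, while a KEEP_LAST reader keeps only 'depth' samples per instance and therefore 'downsamples' when the application reads too slowly.

        final class ReaderHistorySketch {
            // Sketch only: a DataReaderQos that does not downsample in the reader cache.
            static DDS.DataReaderQos keepEverythingQos(DDS.Subscriber subscriber) {
                DDS.DataReaderQosHolder h = new DDS.DataReaderQosHolder();
                subscriber.get_default_datareader_qos(h);
                h.value.reliability.kind = DDS.ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
                // KEEP_ALL: samples queue up until the application reads/takes them.
                h.value.history.kind = DDS.HistoryQosPolicyKind.KEEP_ALL_HISTORY_QOS;
                // With KEEP_LAST only 'depth' samples per instance are kept, so a slow
                // reader sees downsampled data rather than message loss:
                // h.value.history.kind  = DDS.HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
                // h.value.history.depth = 1;
                return h.value;
            }
        }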
  8. Hi everyone, I would like to understand the best way of guaranteeing that messages are received in the same order in which they are published. In our system we have publishers and subscribers on different nodes, communicating wirelessly. Nodes communicate over different topics. All the messages sent on the same topic have the same value for the key (i.e. #pragma keylist). What are the right configurations for these two problems: making sure messages on the same topic are received in the order they are published (i.e. the publisher sends messages 1, 2, and 3 on topic A → …
  9. Hi Trs91, Here is the fix for a similar problem. Regarding the nature of the create_topic() API: it creates a reference to a new or existing Topic under the given name, for a specific type, with the desired QosPolicy settings and, if applicable, attaches the optionally specified TopicListener to it. You can get details from the link below: http://download.ist.adlinktech.com/docs/Vortex/apis/ospl/cs_api/html/a00928.html#a83346dafb28e1fe8f7f3aa5c545fe97f With best regards, Vivek Pandey, Solutions Architect, Adlink Technology
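     To make the 'new or existing' behavior concrete, a rough sketch in the classic Java API (the HelloWorldData.Msg type, the topic name and the 'participant' argument are borrowed from the HelloWorld example and are assumptions; the linked C# call has the same shape):

        final class TopicSketch {
            // Sketch: returns a Topic for the HelloWorld Msg type. If a Topic with the
            // same name and type already exists in this participant, a reference to the
            // existing Topic is returned (provided the QoS is consistent).
            static DDS.Topic msgTopic(DDS.DomainParticipant participant) {
                HelloWorldData.MsgTypeSupport ts = new HelloWorldData.MsgTypeSupport();
                ts.register_type(participant, ts.get_type_name());
                DDS.TopicQosHolder tQos = new DDS.TopicQosHolder();
                participant.get_default_topic_qos(tQos);
                return participant.create_topic(
                        "HelloWorldData_Msg",       // topic name
                        ts.get_type_name(),         // registered type name
                        tQos.value,                 // desired QosPolicy settings
                        null,                       // optional TopicListener
                        DDS.STATUS_MASK_NONE.value);
            }
        }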
  10. UPDATE: I have been able to run this successfully using a computer with fewer restrictions (i.e. a 'personal' rather than 'company' laptop), which means I believe the issue is not a DDS or code issue but rather an IT infrastructure one. I am not sure what specifically the problem may be or what I need to ask the IT department to look into. Can anyone advise? Thanks
  11. Hi, I am using the community version of OpenSplice DDS and am having issues with the basic HelloWorld example. My configuration is: Windows; building in Visual Studio 2019; using the C# examples; with the OSPL_SP_DDSI.xml config file. I have successfully built the example and can run both the _sub and _pub exes fine; however, the messages published by the publisher are not received by the subscriber. My method for running is either to run both exes at once from within VS, or to use two Windows CMD command lines, run the release.bat script in each, and then execute the two exes.
  12. Thank you for the suggestion. Unfortunately I tried it and was not able to get it to work: yes, the query was accepted, but it did not behave as it was supposed to. Fortunately this is a short-term issue. Eventually I can get rid of the test for the empty string, at which point I'll be able to use a query again. I am quite surprised that this is accepted: qc = QueryCondition(reader, mask, "identity=%0 OR identity=%1", ["foo", ""]) but this is not: qc = QueryCondition(reader, mask, "identity='foo' OR identity=''") I realize for real SQL queries it is important to sanitize in…
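     For comparison, the parameterized form has the same shape in the classic Java API. The sketch below only illustrates the API call (the 'identity' field and the values come from the posts above, and whether string parameters need surrounding quotes can differ per language binding, so treat that as an assumption to verify):

        final class QuerySketch {
            // Sketch: a QueryCondition using positional parameters (%0, %1) instead of
            // literals embedded in the expression. 'reader' is any existing DataReader.
            static DDS.QueryCondition identityQuery(DDS.DataReader reader) {
                return reader.create_querycondition(
                        DDS.ANY_SAMPLE_STATE.value,
                        DDS.ANY_VIEW_STATE.value,
                        DDS.ANY_INSTANCE_STATE.value,
                        "identity = %0 OR identity = %1",
                        new String[] { "'saluser@2e2a97a8cda0'", "''" }); // quoting may vary
            }
        }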
  13. I am trying to perform a read query where a field either matches a fixed value OR is empty. This is using the Python dds library, though I hope that doesn't affect the answer. What I have tried: (identity = 'saluser@2e2a97a8cda0') OR (identity = '') The problem appears to be in the second part: identity = '' Using double quotes instead of single quotes doesn't help. If I add a space between the two single quotes then the query is built, but it does not do what I want. Any suggestions?
  14. Thanks a lot for the explanation Vivek, that makes sense to me. Best, Luca
  15. Dear Luca, The problem is because of a momentary network disconnection and re-connection between the two nodes. As a result of the disconnection between the data writer and the data reader, the instances go to the NOT_ALIVE_NO_WRITERS/NOT_ALIVE_DISPOSED state. I suppose you are using the take call. take may remove the instance from the reader administration when the instance becomes empty. When the network connection is restored, either the durability service will realign the data or the writer (in case of transient_local) may resend its data again. Because the instance was removed as a result of the take, a …
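     To make the 'take removes the instance' point concrete, a sketch of a typical take loop that also inspects the instance state (the HelloWorldData.Msg reader and its field names are borrowed from the HelloWorld example and are assumptions; the pattern is what matters):

        final class TakeSketch {
            // Sketch: take() returns the data and may remove an empty instance from the
            // reader administration, so realigned/resent data can later show up as new.
            static void drain(HelloWorldData.MsgDataReader reader) {
                HelloWorldData.MsgSeqHolder data = new HelloWorldData.MsgSeqHolder();
                DDS.SampleInfoSeqHolder info = new DDS.SampleInfoSeqHolder();
                reader.take(data, info,
                        DDS.LENGTH_UNLIMITED.value,
                        DDS.ANY_SAMPLE_STATE.value,
                        DDS.ANY_VIEW_STATE.value,
                        DDS.ANY_INSTANCE_STATE.value);
                for (int i = 0; i < info.value.length; i++) {
                    DDS.SampleInfo si = info.value[i];
                    if (si.instance_state == DDS.NOT_ALIVE_NO_WRITERS_INSTANCE_STATE.value) {
                        // Writer(s) for this instance disappeared, e.g. after a disconnection.
                    } else if (si.valid_data) {
                        System.out.println("message: " + data.value[i].message);
                    }
                }
                reader.return_loan(data, info); // hand the loaned buffers back
            }
        }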
  16. Dear Vivek, Thanks for your answer; we will try to disable the durability service and policies and let you know how it goes. Unfortunately, the problem is happening only on a deployed system and it's not easy for us to use the commercial version there. If the proposed changes do not help, we will try to get it deployed. One more question: do you know what could be causing the reader to receive the same message twice after being created (see point below)? Thanks a lot, Luca
  17. Hi, Thanks for explaining. I have a few remarks/questions. I see you're using TRANSIENT durability, which is typically exploited when using 'federated deployment', where federations have a configured durability service that maintains non-volatile data for late joiners. In case you're using standalone deployment (aka 'single-process'), which is the only option when using the community edition, you can still use TRANSIENT data, but that then relies on applications being active that have a configured durability service (in their 'ospl' configuration) and where the historical data re…
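     For a standalone/community deployment the usual self-contained alternative is TRANSIENT_LOCAL, where the writer itself keeps the history for late joiners. A sketch only (the holder style follows the QoS snippet quoted in the next post; the publisher argument and depth are placeholders):

        final class LateJoinerSketch {
            // Sketch: TRANSIENT_LOCAL keeps historical data in the writer itself, so no
            // separate durability service is needed for late-joining readers.
            static DDS.DataWriterQos transientLocalQos(DDS.Publisher publisher, int depth) {
                DDS.DataWriterQosHolder h = new DDS.DataWriterQosHolder();
                publisher.get_default_datawriter_qos(h);
                h.value.reliability.kind = DDS.ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
                h.value.durability.kind  = DDS.DurabilityQosPolicyKind.TRANSIENT_LOCAL_DURABILITY_QOS;
                // The writer history bounds what a late joiner can still receive.
                h.value.history.kind  = DDS.HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
                h.value.history.depth = depth;
                // A matching reader must also request TRANSIENT_LOCAL (or stronger) durability.
                return h.value;
            }
        }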
  18. Hi Hans, Thank you for your reply. We have a few vehicles to connect to a middleware, and both can publish on different topics. We have established QoS parameters but we don't have a solution. We need persistency for late joiners, but when a vehicle or the middleware restarts, it receives all messages again. We assume this happens because they have different GUIDs. These are the values that we have defined:
        topicQos.value.reliability.kind = DDS.ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
        topicQos.value.durability.kind = DDS.DurabilityQosPolicyKind.TRANSIENT_DURABILITY_QOS; …
  19. DurablePolicies is not required, because you are using TRANSIENT_LOCAL_DURABILITY_QOS and the ddsi service. If ddsi is used, then durability has NOTHING to do with transient-data delivery, because ddsi is responsible for the alignment of builtin topics. In fact, you don't need a durability service at all to experience transient-local behavior when ddsi is used. DurablePolicies is only required when you don't run a durability service locally, but want to request data from a durability service on a remote federation using the client-durability feature. I am not sure about the cause of your segmentation fault …
  20. GUIDs are automatically/internally generated, so they are not meant to be manually provided (it's surely not part of the DDS API that you'd want to program against). I'm curious, however, what problem you're facing (which apparently is related to 'repeated messages') .. could you elaborate on that a little?
  21. Dear Vivek, Thanks a lot for your answer. We will remove the durability service from the ospl.xml configuration. Should we keep the DurablePolicies? Do you have any idea on what could cause the segmentation fault? Could that be the network congestion effect mentioned in your answer? Thanks in advance, Luca
  22. Dear all, We are developing a solution with OpenSplice. To avoid repeated messages, we want to manually set the GUID for each subscriber. Is this possible? We have been looking for documentation but have not found anything. Thank you!
  23. Hi Sahin, Can you start a Windows command prompt and run the release.bat file located in the OpenSplice root folder? When you have done that, can you start Visual Studio from this prompt? This ensures that all environment variables that are used to run OpenSplice are set inside Visual Studio.
  24. I have OpenSplice applications which were developed in IntelliJ and Visual Studio. When I run those applications from the IDEs I get the following report:
        Report : Unable to connect to domain id = <ANY>. The most common causes of this error are an incorrect configuration file or that OpenSpliceDDS is not running (when using shared memory mode).
        Internals : u_participantNew/u_participant.c/234/773/1591800447.288588100
        ----------------------------------------------------------------------------------------
        Report : Failed to register server died callback for domain ospl_shmem_d…