OpenSplice DDS Forum

JMStranz

Members
  • Content count: 26

About JMStranz
  • Rank: Member

Profile Information
  • Company: Gantner Instruments GmbH
  1. JMStranz

    Configuring QoS policies

    Hi Hans, thank you for your hint. In the meantime I also found it ... In the source code I also found some XML files in which QoS policies are defined, as well as the schema file "DDS_QoSProfile.xsd". Is there a special editor for QoS profile files? Best regards, Jan-Marc.
  2. JMStranz

    Configuring QoS policies

    In my DDS applications (which all use the same data model) the QoS policies are set in the program code. The applications are written in C++ and use the "ISO/IEC C++ 2 DCPS API". The data model is defined in an IDL file shared by all applications; the QoS policies, in contrast, are set individually in each application. This often leads to inconsistencies between the QoS policies (QoS mismatches). Is there a way to describe the QoS policies in a separate file (for example an XML file)? I know that the ability to use XML to configure DDS QoS was standardized by the OMG as part of the "DDS for LwCCM" specification. Or is there another way to define the QoS policies consistently for all applications? Best regards, Jan-Marc.
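For illustration, this is roughly what such a shared QoS file could look like. The profile name and element spellings below follow the OMG "DDS for LwCCM"-style XML QoS syntax and are assumptions; validate against the "DDS_QoSProfile.xsd" shipped with the product before relying on them:

```xml
<!-- Hedged sketch of a shared XML QoS profile; element names are assumed
     from the OMG XML QoS syntax, not taken from DDS_QoSProfile.xsd. -->
<dds>
  <qos_profile name="GInsDDS_DefaultProfile">
    <datawriter_qos>
      <reliability><kind>RELIABLE_RELIABILITY_QOS</kind></reliability>
      <durability><kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind></durability>
    </datawriter_qos>
    <datareader_qos>
      <reliability><kind>RELIABLE_RELIABILITY_QOS</kind></reliability>
      <durability><kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind></durability>
    </datareader_qos>
  </qos_profile>
</dds>
```

Whatever the exact schema, keeping reader and writer durability in one file that every application loads is what would prevent the mismatches described above.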
  3. Hi Hans! Thank you for your hint. I already knew about "ldd". However, it only shows that "libc.so.6" is used and from which directory, but not which version of "libc.so.6" was used during the build! I was hoping that perhaps a define or an entry is created during the build process. For context: the "OpenSplice Run-Time System (RTS)" can be installed simply by unpacking the installation archive. Unfortunately there is no check whether the installed RTS is compatible with the respective node! How could this be checked? Best regards, Jan-Marc.
  4. I want to check (on Linux) if the OpenSplice RTS libraries are executable on the node. For that I would like to compare the version of "glibc" installed on the node with the version of "glibc" used to build the libraries. If the versions differ, the DDS application should output a corresponding error message. How can I find the version of "glibc" used to build the OpenSplice RTS libraries? Does the meta configuration file "ospl_metaconfig.xml" contain a relevant entry? For any hint I would be very grateful! Jan-Marc.
  5. In my application I'm using the topic QoS "RELIABLE". Under unfavorable network conditions I receive the warning "writer ... waiting on high watermark due to reader". Is there a way for the application to be notified about this event? Note: I use the "ISO C++ 2 DCPS" API and the "standalone" deployment. Can I influence the behavior of the WHC (writer history cache) via API functions, for example by changing the watermark value? Thanks in advance for your help! Best regards, Jan-Marc.
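For reference, the DDSI2 writer history cache watermarks appear to be tunable in the deployment file rather than through the API. A hedged sketch, assuming the Internal/Watermarks elements exist in this form; the element names and units here are from memory, so verify them against "ospl_metaconfig.xml" before use:

```xml
<DDSI2Service name="ddsi2">
  <Internal>
    <Watermarks>
      <!-- Assumed element names and units; check ospl_metaconfig.xml. -->
      <WhcLow>1 kB</WhcLow>
      <WhcHigh>500 kB</WhcHigh>
    </Watermarks>
  </Internal>
</DDSI2Service>
```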
  6. JMStranz

    DDS durability service

    Hi Hans, I did not quite understand your last statement. I use "TRANSIENT_LOCAL" for some topics, so the "durability" service is not necessary, right? Or is the "durability" service nevertheless necessary so that "late joiners" still receive the topics transmitted as "TRANSIENT_LOCAL"? I would like to remove the "durability" service from the configuration so that fewer warnings are written to the log file. Do I still need the "durability" service? Best regards, Jan-Marc.
  7. JMStranz

    DDS durability service

    Hi Hans, you said that the durability service is not used in connection with "TransientLocal". Should I, or could I, then remove the <DurabilityService> entry from the configuration file? Best regards, Jan-Marc.
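If the service really can be dropped, note that a typical "ospl.xml" references durability in two places. A hedged sketch of the relevant layout, assuming a standard single-process configuration; the actual file may be organized differently:

```xml
<OpenSplice>
  <Domain>
    <Name>ospl_sp_ddsi</Name>
    <SingleProcess>true</SingleProcess>
    <!-- 1. Remove the service registration:
         <Service name="durability"><Command>durability</Command></Service> -->
  </Domain>
  <!-- 2. Remove the matching configuration block:
       <DurabilityService name="durability"> ... </DurabilityService> -->
  <DDSI2Service name="ddsi2">
    <!-- networking configuration stays as-is -->
  </DDSI2Service>
</OpenSplice>
```

Removing only one of the two would likely produce startup errors, so both should go together.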
  8. JMStranz

    DDS durability service

    Hi Hans! I have to contact you again. After I successfully changed the application so that some topics are now transmitted with QoS "TransientLocal", I get another warning:

        Report      : WARNING
        Date        : 2019-06-07T05:32:22+0000
        Description : Determining master based on majority voting, this may cause alignment issues.
        Node        : qstation
        Process     : GInsDDSVariable <10953>
        Thread      : conflictResolver b3333b40
        Internals   : 6.9.190321OSS///d_groupLocalListenerDetermineMastersLegacy/d_groupLocalListener.c/1647/0/1559885542.204049829/0

    I get this warning on the publisher side. Note: I also have a "monitor application", which is only a subscriber. Unfortunately I cannot find any explanation of this warning. Could you tell me something about it? Best regards, Jan-Marc.
  9. JMStranz

    DDS durability service

    Hi Hans! I have changed my application and now I use "TransientLocal". Then I stopped all running applications on all nodes. Afterwards, I installed the new version on all nodes and started them one after the other. After each start of the application on a node I checked the log files. And behold: now everything works as expected and without the warning! Something strange must have happened. I use standalone deployment, i.e. the mistake you describe for the "federated" architecture cannot happen ... Anyway, now it works and I'm satisfied for the time being. As a result, I have learned a lot about the various QoS policies. Thanks again for your help! Best regards, Jan-Marc.
  10. JMStranz

    DDS durability service

    Hi Hans, that's exactly my problem: I don't understand it either. The same application runs on all nodes (here: 2 nodes) and the warning appears when I make the change. If you can't see an obvious flaw in the code, then I'll examine it very carefully again. I will then, for example, not just stop the application on the individual nodes, but also restart the nodes themselves to make sure that no "remnants" are left over. I'll contact you again if all this doesn't help. Best regards, Jan-Marc.
  11. JMStranz

    DDS durability service

    Hi Hans, sorry, I forgot to mention that I only replaced "dds::core::policy::Durability::Transient()" with "dds::core::policy::Durability::TransientLocal()" everywhere it occurs. For example:

        dds::topic::qos::TopicQos TransientTopicQos = Participant.default_topic_qos()
            << dds::core::policy::Durability::Transient();

    changed to:

        dds::topic::qos::TopicQos TransientTopicQos = Participant.default_topic_qos()
            << dds::core::policy::Durability::TransientLocal();

    Otherwise I made no further changes. After this change I get the warning. Best regards, Jan-Marc.
  12. JMStranz

    DDS durability service

    Hi Hans, I have attached a code snippet that includes the creation of the DDS entities. You'll find the relevant parts here: creation of the publisher entities: void CGInsDDSPublisher::Open(void); creation of the subscriber entities: void CGInsDDSSubscriber::Open(const std::string& IDFilter). I would be happy about any hint or comment! Best regards, Jan-Marc. DDSDurability.cpp
  13. JMStranz

    DDS durability service

    Hi Hans, thank you very much for your hints and explanations. However, I am still very much at a loss. I checked twice whether all nodes really work with QoS "TRANSIENT_LOCAL", yet I still receive the warning "Detected Unmatching QoS Policy ...". As already mentioned, I used the example "Chat" as a starting point; there, too, topics are transferred with QoS "TRANSIENT". The necessary entities in my application are created as follows (shown here only for the writers; for the readers this is done correspondingly):

        // 1. "Participant" with "ReliableTopicQos".
        dds::domain::DomainParticipant Participant(org::opensplice::domain::default_id());
        dds::topic::qos::TopicQos ReliableTopicQos = Participant.default_topic_qos()
            << dds::core::policy::Reliability::Reliable();
        Participant.default_topic_qos(ReliableTopicQos);

        // 2. "Publisher" on the domain participant.
        dds::pub::qos::PublisherQos PublisherQos = Participant.default_publisher_qos()
            << dds::core::policy::Partition(GInsDDS::PARTITION_NAME);
        dds::pub::Publisher Publisher(Participant, PublisherQos);

        // 3. "TransientTopicQos" with Durability set to "Transient" to ensure that if a
        //    subscriber joins after the sample is written, DDS will still retain the sample.
        dds::topic::qos::TopicQos TransientTopicQos = Participant.default_topic_qos()
            << dds::core::policy::Durability::Transient();

        // 4. Topic and writer for "ParticipantInfo".
        dds::topic::Topic<GInsDDS::ParticipantInfo> ParticipantInfoTopic(Participant,
            GInsDDS::TOPIC_NAME_PARTICIPANTINFO, TransientTopicQos);
        dds::pub::qos::DataWriterQos ParticipantInfoWriterQos = ParticipantInfoTopic.qos();
        // "ParticipantInfoWriterQos" with "autodispose_unregistered_instances" set to false.
        ParticipantInfoWriterQos
            << dds::core::policy::WriterDataLifecycle::ManuallyDisposeUnregisteredInstances();
        m_ParticipantInfoWriter = dds::pub::DataWriter<GInsDDS::ParticipantInfo>(Publisher,
            ParticipantInfoTopic, ParticipantInfoWriterQos);

    This works well and without any warnings regarding the QoS policies for "ParticipantInfo". However, if I change "dds::core::policy::Durability::Transient()" to "dds::core::policy::Durability::TransientLocal()" (for writers and readers!), then I get the warning described above. I am now at a loss what to do. Could you please help me? Best regards, Jan-Marc.
  14. JMStranz

    DDS durability service

    Hi Hans, I tried using "TRANSIENT_LOCAL" instead of "TRANSIENT" as the topic QoS. However, I get the following warnings, although the applications on all participating nodes use exactly the same topic QoS:

        Detected Unmatching QoS Policy: 'Durability' for Topic <GInsDDS_VariableInfo>.
        ...
        Detected Unmatching QoS Policy: 'Durability' for Topic <GInsDDS_ParticipantInfo>.

    If I use "TRANSIENT" again, these warnings disappear. In both cases the data is transferred as expected. What could be the reason? Best regards, Jan-Marc.
  15. JMStranz

    DDS durability service

    Hi Hans, thank you very much for your answer! W.r.t. the DDS configuration: for the nodes on which "multicast" is not possible I've adapted the configuration as follows:

        <DDSI2Service name="ddsi2">
          <General>
            <NetworkInterfaceAddress>AUTO</NetworkInterfaceAddress>
            <AllowMulticast>false</AllowMulticast>
            <EnableMulticastLoopback>false</EnableMulticastLoopback>
            <CoexistWithNativeNetworking>false</CoexistWithNativeNetworking>
          </General>
          ...
          <Discovery>
            <Peers>
              <Peer Address="192.168.5.68"/>
              <Peer Address="192.168.5.69"/>
            </Peers>
          </Discovery>
        </DDSI2Service>

    The <Peers> entry contains all (!) nodes that write topics; these nodes are configured with multicast allowed. Should the nodes for which multicast is not possible also be added to the <Peers> entry on those nodes? Best regards, Jan-Marc.
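For what it's worth, since DDSI2 discovery traffic flows in both directions, one plausible arrangement is for each unicast-only node to list every remote node, readers and writers alike, in its <Peers>. A sketch of that idea; the third address below is a placeholder for another unicast-only node, not taken from the configuration above:

```xml
<Discovery>
  <Peers>
    <!-- Multicast-capable writer nodes (from the configuration above). -->
    <Peer Address="192.168.5.68"/>
    <Peer Address="192.168.5.69"/>
    <!-- Placeholder for a further unicast-only node, if any. -->
    <Peer Address="192.168.5.70"/>
  </Peers>
</Discovery>
```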