OpenSplice DDS Forum


Posts by e_hndrks (OpenSplice DDS Expert, Hengelo, Netherlands)
  1. Hi Bud, If you are interested in the meta-data of normal topics, look at the meta_data field of your DCPSTopic in a tool like the Tuner or the Tester. Regrettably, since this field is not part of the DDS standard, it is not accessible through the standardized DDS API, which strips the meta_data and key_list fields from the DCPSTopic samples. Regards, Erik Hendriks.
  2. Hi Aaron, Although unregistering your instance will probably do the job here, it might also double your network traffic, since each instance is now being created (by writing your sample) and then being unregistered (the unregister_instance operation writes a so-called unregister message, which might have a footprint similar to your first message). If you are only using DDS as a stream, then do you really need to use monotonically increasing keys? You could make the topic keyless (just use a #pragma keylist with an empty keylist), which results in a singleton instance. Every sample you write then belongs to this singleton instance, and so you don't need to do any additional unregistering per sample. However, you might need to switch to KEEP_ALL on both your Reader and your Writer to make sure that older samples are not pushed out of your Reader/Writer cache by newer samples. Regards, Erik Hendriks.
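     As a minimal sketch, a keyless topic is declared in OpenSplice IDL by giving the type a keylist pragma with no key fields (the module and type names here are assumptions, not from the original post):

     ```idl
     // Hypothetical stream payload type. The #pragma keylist names the
     // type but lists no key fields, making the topic keyless: every
     // sample written maps onto the single singleton instance.
     module StreamModule {
         struct StreamData {
             sequence<octet> payload;
         };
     #pragma keylist StreamData
     };
     ```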
  3. Hi Aaron, Just a quick question: are you using a keyed topic? And are you monotonically increasing your key values for every sample you write? Because in that case you are indeed leaking your instances away. An instance remains available in your reader cache until the Writer decides to unregister that instance, either explicitly (using the unregister_instance operation) or implicitly (by deleting the Writer itself). Can you let us know if this was indeed your scenario and if the suggested fix works for you? Regards, Erik Hendriks.
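     For reference, the write-then-unregister pattern looks roughly like this in the classic DDS Java API (the generated type SensorSample and its writer are hypothetical names for illustration):

     ```java
     // Sketch, classic DDS Java API; SensorSample is an assumed
     // IDL-generated type. Writing with a fresh key creates a new
     // instance; unregistering it afterwards lets readers eventually
     // reclaim the instance's resources (at the cost of an extra
     // unregister message on the wire).
     SensorSample sample = new SensorSample();
     sample.id = nextId++;                        // monotonically increasing key
     long handle = writer.register_instance(sample);
     writer.write(sample, handle);
     writer.unregister_instance(sample, handle);  // emits an unregister message
     ```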
  4. Hi Thibault, The requirement to use a KEEP_ALL policy in combination with a TOPIC scope for coherent data is only for the Writer side: your reader side can safely use a KEEP_LAST policy. The reason for this requirement is that writer history is not only used for maintaining historical data (in case of TRANSIENT_LOCAL), but also for the purposes of re-transmission. So consider the following scenario: Writer A sends a coherent update consisting of instances I1 and I2. Instance I1 is successfully acknowledged by all receivers, but I2 is not (yet) and needs to be re-transmitted. Before this re-transmit, the Writer now sends another coherent update consisting of instances I2 and I3. The second I2 sample now pushes the first one out of the Writer history, making it impossible for the first transaction (I1, I2) to complete on all its receivers. The end result is that some nodes have received the set (I1, I2) followed by (I2, I3) while other nodes have only received (I2, I3), which effectively violates the concept of eventual consistency. For that reason, we mandate the use of a KEEP_ALL policy on your Writer side. For the Reader side this is not required because the Reader side will only consume history for completed coherent sets: samples belonging to a not yet completed coherent set will be stored in another administration to which the HistoryQosPolicy does not apply. Hope that answers your question. Regards, Erik Hendriks.
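     The writer-side setup described above can be sketched as follows in the classic DDS Java API (participant, topic, writer, and sample variables are assumed to exist; their names are illustrative):

     ```java
     // Sketch, classic DDS Java API: TOPIC-scoped coherent access with
     // KEEP_ALL writer history, so a pending coherent set can never be
     // pushed out of the writer history before every receiver has
     // acknowledged it.
     PublisherQosHolder pQos = new PublisherQosHolder();
     participant.get_default_publisher_qos(pQos);
     pQos.value.presentation.access_scope =
             PresentationQosPolicyAccessScopeKind.TOPIC_PRESENTATION_QOS;
     pQos.value.presentation.coherent_access = true;
     Publisher publisher = participant.create_publisher(
             pQos.value, null, STATUS_MASK_NONE.value);

     DataWriterQosHolder wQos = new DataWriterQosHolder();
     publisher.get_default_datawriter_qos(wQos);
     wQos.value.history.kind = HistoryQosPolicyKind.KEEP_ALL_HISTORY_QOS;
     // ... create the writer(s) with wQos.value ...

     publisher.begin_coherent_changes();
     writerA.write(i1, HANDLE_NIL.value);   // instance I1
     writerA.write(i2, HANDLE_NIL.value);   // instance I2
     publisher.end_coherent_changes();      // set (I1, I2) delivered atomically
     ```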
  5. Hi aphelix, I see that you first create your writer using the topicQos (in which case the non-overlapping parts such as the WriterDataLifecycleQosPolicy get initialized to their default settings, which for auto_dispose_unregistered_instances means TRUE), then get the WriterQos, modify its auto_dispose setting and set it back as the new WriterQos. Although that is not illegal according to the DDS specification, we don't support changeable Qos in our DDSI stack yet. Can you try modifying the autodispose setting before you create your Writer and see if that solves your problem? I am curious to hear the result. Regards, Erik.
  6. Hi aphelix, Are the messages disappearing when their Writer is deleted? If so you might want to check your WriterDataLifecycleQosPolicy in your WriterQos for a field named auto_dispose_unregistered_instances. The default setting of this field is TRUE, meaning that when you unregister an instance (and deleting a Writer implicitly unregisters all its instances) you also automatically dispose it. For the persistent store this means the persistent data samples should all be purged, hence an empty store is left. If this is indeed the case, try setting the auto_dispose_unregistered_instances field to FALSE (which is the right thing to do for any TRANSIENT/PERSISTENT data) and see if that solves your problem. (And let us know if it indeed does solve your problem.) Regards, Erik Hendriks.
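     A minimal sketch of that fix in the classic DDS Java API (publisher and topic variables are assumed to exist; note the writer must be created with this Qos, per the previous answer, since changing it afterwards is not supported):

     ```java
     // Sketch, classic DDS Java API: disable automatic disposal of
     // unregistered instances BEFORE creating the Writer, so deleting
     // the Writer does not purge TRANSIENT/PERSISTENT data from the
     // durability stores.
     DataWriterQosHolder wQos = new DataWriterQosHolder();
     publisher.get_default_datawriter_qos(wQos);
     wQos.value.writer_data_lifecycle.autodispose_unregistered_instances = false;
     DataWriter writer = publisher.create_datawriter(
             topic, wQos.value, null, STATUS_MASK_NONE.value);
     ```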
  7. Hi Reinhard, The data_available callback will trigger on any incoming update, so either on an incoming sample or on an instance lifecycle event. That means that you don't need an additional triggering event to catch the NOT_ALIVE events, and you can handle both the samples and the instance state changes in the same callback. Regards, Erik Hendriks.
  8. Hi Reinhard, I guess you want to read ALIVE data, but take NOT_ALIVE data so as to release the resources of instances that will no longer be updated? If that is indeed the case, why not simply do the read call using the ALIVE instance_state mask, followed by a take call using the NOT_ALIVE mask in your data_available_callback? Regards, Erik Hendriks.
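     That read/take combination could look roughly like this in the classic DDS Java API (the generated reader and sequence holder types for a hypothetical SensorSample topic are assumptions):

     ```java
     // Sketch, classic DDS Java API, inside on_data_available: read
     // ALIVE data (it stays in the reader cache), but take NOT_ALIVE
     // data so that dead instances release their resources.
     SensorSampleSeqHolder data = new SensorSampleSeqHolder();
     SampleInfoSeqHolder info = new SampleInfoSeqHolder();

     reader.read(data, info, LENGTH_UNLIMITED.value,
             ANY_SAMPLE_STATE.value, ANY_VIEW_STATE.value,
             ALIVE_INSTANCE_STATE.value);
     // ... process data.value ...
     reader.return_loan(data, info);

     reader.take(data, info, LENGTH_UNLIMITED.value,
             ANY_SAMPLE_STATE.value, ANY_VIEW_STATE.value,
             NOT_ALIVE_INSTANCE_STATE.value);
     // ... handle disposed/unregistered instances ...
     reader.return_loan(data, info);
     ```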
  9. Hi Reinhard, What Hans is trying to say is that instance handles are always local to the Entity in which they operate, so if I have two readers subscribing to the same topic, their instance handles will be different even when expressing the same instances. Try to see the instance handle as a kind of pointer into the reader administration: although both readers will have a replicated store, they manage their own instance tables and therefore have their own handles. That is why the content of one Reader is left untouched when you take data out of the other Reader. Likewise, the instance handles from the Writer side are separate from those on the Reader side. I don't really understand your use case though: why do you need to know the instance handle of an instance to determine whether it is still registered? The instance lifecycle state can tell you exactly that without having to resort to its handle: if its instance_state is still ALIVE, it is still registered. Regards, Erik Hendriks.
  10. Hi Thibault, My colleague just suggested a 4th way to save bandwidth: by exploiting the latency_budget QosPolicy. By default this is set to 0, meaning every sample is put on the wire immediately when it is published. If you set it to a larger value, multiple samples can be packed together, thus sharing a common header and thereby saving bandwidth. Especially smaller samples may benefit from such an optimization, where you exchange a little bit of latency for better throughput. Regards, Erik Hendriks.
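      A minimal sketch of that trade-off in the classic DDS Java API (publisher and topic variables are assumed to exist; the 10 ms budget is an arbitrary example value):

      ```java
      // Sketch, classic DDS Java API: allow samples to linger for up to
      // 10 ms before they must go on the wire, so the middleware may
      // batch several small samples into one network packet. You trade
      // a little latency for better throughput.
      DataWriterQosHolder wQos = new DataWriterQosHolder();
      publisher.get_default_datawriter_qos(wQos);
      wQos.value.latency_budget.duration.sec = 0;
      wQos.value.latency_budget.duration.nanosec = 10 * 1000 * 1000; // 10 ms
      DataWriter writer = publisher.create_datawriter(
              topic, wQos.value, null, STATUS_MASK_NONE.value);
      ```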
  11. Hi Thibault, The inline Qos is a parameterized list that can be used to transport the relevant Qos values of the Writer at the moment it writes the sample, but many vendors use this parameterized list for other purposes as well. In our case we use it for the following purposes:
      • To transmit the identity of the sample's Writer and its instance (2 x 12 bytes), which for OpenSplice is different from the identifiers used in the standardized ddsi interoperability specification. Please keep in mind that ddsi is an interoperability protocol, and was standardized at a later stage than the core DDS specification. Actually, DDSI and the core DDS specification contradict one another in the way they identify their DDS entities: the core DDS spec (and OpenSplice, which was built according to this spec) identifies an entity by an array of three longs, whereas the ddsi specification identifies an entity by an array of 16 octets, for which it describes exactly how to fill them to avoid collisions between vendors. Because of this inconsistency we have to include our own identifiers as extra payload in the message.
      • To transmit our own sequence number as opposed to the one mandated by the ddsi spec (2 x 4 bytes). Again, the way sequence numbers are assigned in the ddsi specification did not match the way we numbered our messages internally, so we have to add our own internal sequence number as payload to the message. Because of support for coherent updates, we also have to include a 2nd sequence number to indicate the starting message of the coherent update.
      A 4 byte header needs to precede this extra payload according to the ddsi specification. So all in all the extra payload is needed to correctly correlate data that is received through different paths, for example a TRANSIENT sample that is received directly through ddsi, but also through the alignment protocol of the durability service (which by the way is not standardized yet) and that still uses our own identifiers.
      There are a number of things you could do to save bandwidth:
      • Use the native networking protocol (only in the commercial edition). Here you don't need to convert from the OpenSplice identifiers into the ddsi identifiers, so no extra payload is needed. Of course this is not an option if you need interoperability with other vendors.
      • Configure ddsi not to include the key-hash that is sent with every message. This will save you about 20 bytes per message.
      • Try to batch multiple small messages into one bigger message before writing them into DDS. We offer an API called the OpenSplice streams that can do this automatically for you.
      Hope that gives you some context and directs you to a workable solution. Regards, Erik Hendriks.
  12. This issue will be solved in the upcoming V6.10.2p3 release.
  13. Hi Davide, I think you are using the 'classic' Java API, where Hans was referring to the newer Java5 API. With respect to factories: the Participant acts as a factory for both its Publishers and its Subscribers. You can obtain the default qos settings from a factory using get_default_xxx_qos(), so for example for the Publisher you would do this:

      PublisherQosHolder pubQos = new PublisherQosHolder();
      int status = participant.get_default_publisher_qos(pubQos);
      ErrorHandler.checkStatus(status, "DDS.DomainParticipant.get_default_publisher_qos");
      pubQos.value.partition.name = new String[1];
      pubQos.value.partition.name[0] = "<Some partition name>";
      Publisher pub = participant.create_publisher(pubQos.value, null, STATUS_MASK_NONE.value);

      Hope that clears things up. Regards, Erik Hendriks.
  14. Hi Davide, The problem here is that you use the PUBLISHER_QOS_DEFAULT as a value, where it is actually interpreted as a reference. It is supposed to be this "magical" reference that, when encountered, is substituted by the default Qos setting in your factory. In your first statement (SubscriberQos subQos = SUBSCRIBER_QOS_DEFAULT.value;) you don't make a deep copy of the value of the SubscriberQos; you just copy the "magical" reference. When applied, the create_publisher/create_subscriber calls recognize the "magical" reference and substitute it by the factory default for the Publisher/Subscriber in question, without looking at its actual values. What you should do is obtain the default value from the factory using the appropriate call (get_default_xxx_qos) and then modify the partition of the resulting value. Regards, Erik Hendriks.
  15. Hi Jami, The list of Conditions you pass to WaitForConditions is meant as an output parameter, not an input parameter. In other words, you are expected to attach the conditions you want to be triggered on to the WaitSet beforehand, and WaitForConditions returns to you the subset of Conditions that actually triggered. You seem to use the parameter as an input parameter, but if you haven't attached your condition using attachCondition first, there is nothing for the WaitSet to trigger on. Regards, Erik Hendriks.
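      The attach-then-wait pattern can be sketched as follows in the classic DDS Java API (where the wait operation is called _wait rather than WaitForConditions; the reader variable and the 10-second timeout are assumptions for illustration):

      ```java
      // Sketch, classic DDS Java API: attach the condition to the
      // WaitSet first; the holder passed to _wait is then filled with
      // the subset of attached conditions that actually triggered.
      WaitSet waitset = new WaitSet();
      ReadCondition cond = reader.create_readcondition(
              ANY_SAMPLE_STATE.value, ANY_VIEW_STATE.value,
              ANY_INSTANCE_STATE.value);
      waitset.attach_condition(cond);

      ConditionSeqHolder triggered = new ConditionSeqHolder();
      Duration_t timeout = new Duration_t(10, 0);  // 10 s
      waitset._wait(triggered, timeout);
      // triggered.value now holds only the conditions that fired
      ```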