OpenSplice DDS Forum

Hans van 't Hag

Moderators
  • Content Count

    436
  • Joined

  • Last visited

About Hans van 't Hag

  • Rank
    Product Manager

Contact Methods

  • Website URL
    http://ist.adlinktech.com/

Profile Information

  • Gender
    Male
  • Location
    Hengelo, The Netherlands
  • Company
    ADLINK Technology


  1. Indeed, that's correct. Just don't confuse 'ordering' with 'delivery', because, as explained, not all (ordered) data may show up at the reader side, for the following reasons (a sketch appears after this post list):
     - data was downsampled at the (KEEP_LAST) writer; for reliable data this is often called 'last-value reliability', meaning that in this pattern the latest data for each instance will be reliably delivered (note 'for each instance': the overwrite behavior, both at the sending and the receiving side, is per instance)
     - data was downsampled at the (KEEP_LAST) reader, which should not be a surprise and actu
  2. In general, 'what' is being received (independent from ordering) is driven by multiple aspects such as reliability and history. The 'history' settings at both the sender and the receiver may impact what is eventually delivered (see the KEEP_LAST sketch after this post list):
     - when a writer exploits a KEEP_LAST history and is writing faster than the system/network can handle, data will be 'downsampled'
     - when a reader exploits a KEEP_LAST history and is reading slower than the data arrives, data will be 'downsampled'
     Note that the above does not constitute 'message-loss' .. it's just
  3. Hi, thanks for explaining. I have a few remarks/questions. I see you're using TRANSIENT durability, which is typically exploited when using a 'federated deployment', where federations have a configured durability-service that maintains non-volatile data for late-joiners (a TRANSIENT-durability sketch appears after this post list). In case you're using a standalone deployment (aka 'single-process'), which is the only option when using the community edition, you can still use TRANSIENT data, but that then relies on applications being active that have a durability-service configured (in their 'ospl' configuration) and where the historical data re
  4. GUIDs are automatically/internally generated, so they are not meant to be provided manually (it's certainly not part of the DDS API that you'd want to program against). I'm curious, however, what problem you're facing (which apparently is related to 'repeated messages') .. could you elaborate on that a little?
  5. A couple of notes:
     - when threads are reported to make no progress, that's often caused by an overloaded system
     - when watermarks are reported to be reached, that's often an indication that data couldn't be delivered
     - when d_namespaceRequests issues are reported, there's an issue with durability (probably in combination with the above)
     So I have a few questions:
     - are you using TRANSIENT and/or PERSISTENT topics? If so, please note that those imply running durability-services, which are typically part of federations
     - are you using the community edition, as that edition does not supp
  6. Hi Luca, receiving transient-local data from a destroyed writer would be a true miracle (as that data is solely maintained at that writer). Are you sure that there are no other writers alive in the system whose data you're receiving (a small diagnostic sketch appears after this post list)? The only other possibility would have been if your data was TRANSIENT instead of TRANSIENT_LOCAL and there were other apps alive that have an 'embedded' durability-service (as the community edition doesn't support federated deployment, where such a durability-service would be part of a federation, which doesn't necessarily need to include any applic
  7. I don't think you have to set the service-cleanup-delay. W.r.t. the history: when using KEEP_ALL (for the durability-service QoS) you should also set the resource-limits, as otherwise it's likely that you'll run out of memory (a sketch appears after this post list).
  8. Wi-Fi is notoriously unreliable when it comes to multicast. If you have an excellent connection that's not an issue, but typically the advantages of using multicast (send-once efficiency) are outweighed by the retransmissions required due to the massive data loss seen with multicast over Wi-Fi. I'm not sure, however, if that would impact your disconnect/reconnect issues .. but at least it's good to know, I guess.
  9. In steady state (i.e. when a writer isn't writing samples), a late-joiner will not receive more 'durable' (i.e. non-volatile) samples (of instances) than what's defined in the durability-service settings as configured via the topic-QoS policy. Those settings are max-samples (for all instances), max-samples-per-instance and/or max-instances (a late-joiner sketch appears after this post list). Where these samples are 'stored' depends on the QoS: when using TRANSIENT_LOCAL durability, those samples are stored 'at the writer' (so they are gone when the writer terminates); if the durability QoS is set to TRANSIENT (or PERSISTE
  10. I think you need to distinguish between reliability and durability (a sketch appears after this post list). Reliability is about the guarantee that 'in steady state' the writer history will 'eventually' be replicated (that is, 'delivered') to the reader history (where it can of course push out samples from that reader's history, depending on the history policy of that reader); in non-steady-state, old samples in the writer history might already have been overwritten by new ones, depending on whether or not you use a KEEP_ALL history policy at the writer side. For short disconnections/reconnections, the reliability protocol should recover from
  11. Hmm .. then I don't know right away what's happening .. I'd suggest raising a ticket with support (preferably with some example code and the used configs to reproduce the error).
  12. Hi, the XML config shows that you're using a federated deployment (shared memory); that implies that you're using the commercially supported version (as the community edition only supports 'standalone', i.e. 'single-process', deployment). From that it follows that you have a commercial subscription, so you can (also) raise a support ticket for questions/bugs etc. Now back to your issue: the ospl-error.log file suggests that the issue could be that you didn't start the federation (i.e. using 'ospl start') before starting the application. Hope this helps, regards -Hans
  13. Can you share the ospl-info and ospl-error log files?
  14. Guys, for years we've been running this forum in parallel to the GitHub repository: https://github.com/ADLINK-IST/opensplice We concluded that it's more efficient to concentrate on one environment and therefore would kindly ask you to direct any remarks/questions to GitHub. Thanks, -Hans
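
Regarding the KEEP_LAST downsampling discussed in posts 1 and 2: a minimal sketch, assuming OpenSplice's ISO C++ DDS API; the type Msg, the generated header Msg_DCPS.hpp, the topic name and the domain id are made-up placeholders. It only illustrates how the history depth bounds what each side keeps per instance.

  // KEEP_LAST bounds what is kept per instance on BOTH sides: a busy writer
  // overwrites ("downsamples") older samples before they are sent, a slow
  // reader overwrites them in its cache before they are read. With RELIABLE
  // reliability the latest value per instance still arrives
  // ("last-value reliability").
  #include <dds/dds.hpp>
  #include "Msg_DCPS.hpp"   // hypothetical IDL-generated type support

  int main() {
      dds::domain::DomainParticipant dp(0);            // default domain (assumed id 0)
      dds::topic::Topic<Msg> topic(dp, "MsgTopic");

      dds::pub::Publisher pub(dp);
      dds::pub::qos::DataWriterQos wqos = pub.default_datawriter_qos()
          << dds::core::policy::Reliability::Reliable()
          << dds::core::policy::History::KeepLast(1);  // keep only the latest sample per instance
      dds::pub::DataWriter<Msg> writer(pub, topic, wqos);

      dds::sub::Subscriber sub(dp);
      dds::sub::qos::DataReaderQos rqos = sub.default_datareader_qos()
          << dds::core::policy::Reliability::Reliable()
          << dds::core::policy::History::KeepLast(5);  // cache at most 5 samples per instance
      dds::sub::DataReader<Msg> reader(sub, topic, rqos);

      // writer.write(sample);  // only the most recent value(s) per instance are guaranteed to arrive
      // reader.take();         // returns at most the 5 most recent samples per instance
      return 0;
  }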
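Regarding the TRANSIENT durability mentioned in post 3: a minimal sketch under the same assumptions (ISO C++ API, hypothetical type Msg). It only shows the QoS side; whether late-joiners actually receive this data still depends on a durability service running somewhere (a federation, or an application configured with one).

  // TRANSIENT data is maintained by a durability service, not by the writer
  // itself, so it can outlive the writer -- provided such a service exists.
  #include <dds/dds.hpp>
  #include "Msg_DCPS.hpp"   // hypothetical IDL-generated type support

  int main() {
      dds::domain::DomainParticipant dp(0);

      dds::topic::qos::TopicQos tqos = dp.default_topic_qos()
          << dds::core::policy::Durability::Transient()
          << dds::core::policy::Reliability::Reliable();
      dds::topic::Topic<Msg> topic(dp, "MsgTopic", tqos);

      dds::pub::Publisher pub(dp);
      dds::pub::qos::DataWriterQos wqos = pub.default_datawriter_qos()
          << dds::core::policy::Durability::Transient()
          << dds::core::policy::Reliability::Reliable();
      dds::pub::DataWriter<Msg> writer(pub, topic, wqos);

      // writer.write(sample);  // kept for late-joiners by the durability service, not by this writer
      return 0;
  }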
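Regarding post 6: a small diagnostic sketch (same assumptions) for checking how many writers are alive and matched with a reader, which is the first thing to rule out before suspecting that data from a destroyed writer reappeared.

  // Print the liveliness-changed status of the reader: alive_count tells you
  // how many matched writers are currently alive in the system.
  #include <dds/dds.hpp>
  #include <iostream>
  #include "Msg_DCPS.hpp"   // hypothetical IDL-generated type support

  int main() {
      dds::domain::DomainParticipant dp(0);
      dds::topic::Topic<Msg> topic(dp, "MsgTopic");
      dds::sub::Subscriber sub(dp);
      dds::sub::DataReader<Msg> reader(sub, topic);

      dds::core::status::LivelinessChangedStatus st = reader.liveliness_changed_status();
      std::cout << "alive matched writers: "     << st.alive_count()
                << ", not-alive matched writers: " << st.not_alive_count() << std::endl;
      return 0;
  }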
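Regarding post 7: a minimal sketch (same assumptions; the limit values are arbitrary examples, and the DurabilityService constructor arguments follow the order used in the DDS ISO C++ API, so verify against your headers) of a topic QoS whose durability-service history is KEEP_ALL but bounded by resource limits.

  // A KEEP_ALL durability-service history without limits can grow until
  // memory runs out; max_samples / max_instances / max_samples_per_instance
  // bound it.
  #include <dds/dds.hpp>
  #include "Msg_DCPS.hpp"   // hypothetical IDL-generated type support

  int main() {
      dds::domain::DomainParticipant dp(0);

      dds::topic::qos::TopicQos tqos = dp.default_topic_qos()
          << dds::core::policy::Durability::Transient()
          << dds::core::policy::DurabilityService(
                 dds::core::Duration::zero(),                // service_cleanup_delay (default is fine)
                 dds::core::policy::HistoryKind::KEEP_ALL,   // keep all durable samples ...
                 1,                                          // history_depth (ignored for KEEP_ALL)
                 1000,                                       // ... but bounded: max_samples (example value)
                 100,                                        // max_instances (example value)
                 10);                                        // max_samples_per_instance (example value)

      dds::topic::Topic<Msg> topic(dp, "MsgTopic", tqos);
      (void)topic;
      return 0;
  }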
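Regarding post 9: a minimal late-joiner sketch (same assumptions): a reader with TRANSIENT durability that waits for historical data. How much durable data it can receive is bounded by the topic's durability-service limits discussed in that post.

  // A late-joining reader asks for the non-volatile ("durable") data and
  // blocks until it has been delivered or the timeout expires.
  #include <dds/dds.hpp>
  #include "Msg_DCPS.hpp"   // hypothetical IDL-generated type support

  int main() {
      dds::domain::DomainParticipant dp(0);
      dds::topic::Topic<Msg> topic(dp, "MsgTopic");
      dds::sub::Subscriber sub(dp);

      dds::sub::qos::DataReaderQos rqos = sub.default_datareader_qos()
          << dds::core::policy::Durability::Transient()
          << dds::core::policy::Reliability::Reliable();
      dds::sub::DataReader<Msg> reader(sub, topic, rqos);

      // Block (up to 10 s) until the durability service has delivered the historical samples.
      reader.wait_for_historical_data(dds::core::Duration(10, 0));
      // reader.take();  // now includes the durable samples available to late-joiners
      return 0;
  }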
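Regarding post 10: a minimal sketch (same assumptions) that makes the reliability/durability distinction concrete: a writer that is RELIABLE with a KEEP_ALL history (so nothing is overwritten before delivery to currently matched readers) but VOLATILE durability (so nothing is kept for late-joiners).

  // RELIABILITY governs delivery of the writer history to readers that are
  // matched right now; DURABILITY governs whether samples are also kept for
  // readers that join later.
  #include <dds/dds.hpp>
  #include "Msg_DCPS.hpp"   // hypothetical IDL-generated type support

  int main() {
      dds::domain::DomainParticipant dp(0);
      dds::topic::Topic<Msg> topic(dp, "MsgTopic");
      dds::pub::Publisher pub(dp);

      dds::pub::qos::DataWriterQos wqos = pub.default_datawriter_qos()
          << dds::core::policy::Reliability::Reliable()   // reliable delivery to matched readers
          << dds::core::policy::History::KeepAll()        // don't overwrite undelivered samples
          << dds::core::policy::Durability::Volatile();   // but nothing is kept for late-joiners

      dds::pub::DataWriter<Msg> writer(pub, topic, wqos);
      // writer.write(sample);
      return 0;
  }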