OpenSplice DDS Forum
rowen

Can a reader get only the last history item and yet have a larger history queue?


I would like to configure a DDS topic reader as follows:

- a read queue with at least 100 entries, so I can occasionally fall a bit behind and not lose data

- on startup read only the single most recent historical (late joiner) value for that topic

The only relevant setting I have found is KEEP_LAST_HISTORY_QOS, which seems to control both the size of the read queue and how many historical values are read. I realize I can simply ignore the extra historical data, but it takes time for DDS to copy it. I hope I am missing something simple.

In case it matters, I am using the VOLATILE settings because I only care about historical data if the writer is still alive.


Hi,

you're confusing history and durability:

  • history
    • a KEEP_LAST history (with depth 'n') lets each instance (i.e. 'key-value') maintain up to 'n' samples (like a FIFO queue)
    • meaning that when you don't read 'fast enough', the oldest samples are pushed out of the reader's history in favor of newly arriving samples
    • note that the KEEP_ALL variant (in combination with RESOURCE_LIMITS to 'cap' resource usage) introduces flow control, where the writer(s) block if you're not reading fast enough
    • so in your case: a KEEP_LAST reader-history setting with a depth of 100
  • durability
    • for non-volatile data there are 3 variants: TRANSIENT_LOCAL, TRANSIENT and PERSISTENT durability
      • all 3 flavors retain some 'historical data' for late-joining applications
      • how much data is maintained is driven by the topic-level (!!) QoS policy called DURABILITY_SERVICE (with a history kind, depth and resource limits)
        • these settings are at 'topic level' because writers/readers might not even be present while the non-volatile data is being preserved
        • I know the specification isn't that clear about this, but at least this is how OpenSplice DDS works
      • the difference between the 3 being:
        • TRANSIENT_LOCAL data is maintained at the publisher/writer side and its lifecycle is coupled to that of the writer (i.e. writer gone, data gone)
        • TRANSIENT data is maintained by a (distributed) set of durability services (in our community edition, these are implicit within each application's library)
        • PERSISTENT data is also maintained by durability services and is (optionally) written to non-volatile storage (i.e. disk) so it outlives system downtime
    • I guess you're looking for TRANSIENT_LOCAL with topic-level durability_service QoS settings of history_kind=KEEP_LAST and history_depth=1 (see the sketch after this list)
      • which will provide the last written sample (for each instance) to your late-joining application
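
To make that concrete, here's a rough sketch in Python of that combination. The class, enum and argument names below are assumptions about the OpenSplice Python bindings, so treat this as a sketch of the approach rather than verified API:

```python
import dds  # OpenSplice Python DDS API

# NOTE: every class/enum/argument name here is an assumption about the
# Python bindings -- adjust to whatever your bindings actually expose.
participant = dds.DomainParticipant()

# Topic-level QoS: DURABILITY_SERVICE controls how much non-volatile data the
# middleware keeps for late joiners -- here a single sample per instance.
topic_qos = dds.Qos([
    dds.DurabilityQosPolicy(dds.DDSDurabilityKind.TRANSIENT_LOCAL),
    dds.DurabilityServiceQosPolicy(history_kind=dds.DDSHistoryKind.KEEP_LAST,
                                   history_depth=1),
])
# 'SensorState' stands in for your registered/generated topic type.
topic = participant.create_topic('SensorState', SensorState, qos=topic_qos)

# Reader-level QoS: HISTORY is purely local -- a FIFO of 100 samples per
# instance, so the application can briefly fall behind without losing data.
reader_qos = dds.Qos([
    dds.DurabilityQosPolicy(dds.DDSDurabilityKind.TRANSIENT_LOCAL),
    dds.HistoryQosPolicy(dds.DDSHistoryKind.KEEP_LAST, 100),
])
reader = participant.create_datareader(topic, qos=reader_qos)
```

With that split, a late-joining reader is handed only one historical sample per instance by the durability machinery, while its own history FIFO still buffers up to 100 live samples.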

Hope this helps.. 


Thank you very much. For the record: this works great if I set history_length in my default QoS file, and the way to do it in the dds Python API is to use a dds.DurabilityServiceQosPolicy.

However, I was hoping to have writers offer more history than most readers read. One reader wants the extra, but the rest do not. So I tried to create one topic for writing and one for reading, with the latter using a shorter history_length (but otherwise identical policy). Unfortunately, creating the second topic fails. Sounds like I'll just have to discard the extra history. I wish readers supported history_length separately from topics.


Hmm, not sure I understand: the history depth is 'local' to each reader, so each of those can configure an arbitrary depth.

If it's about the availability of TRANSIENT/TRANSIENT_LOCAL historical data for late-joiners, then it's indeed the topic-level durability_service QoS setting for that topic that defines the amount of maintained historical data for any late-joiner (of that topic). Still, if your reader calls wait_for_historical_data, you'll see 'at most' your local reader's history-depth worth of historical samples (the rest will be pushed out while that historical data is being provisioned).
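
For example (a sketch only; the QoS class/enum names and the wait_for_historical_data argument are assumptions about the Python bindings), two readers on the same topic can use different local depths:

```python
# Same topic, two readers, each with its own purely local history depth.
deep_reader = participant.create_datareader(
    topic, qos=dds.Qos([dds.HistoryQosPolicy(dds.DDSHistoryKind.KEEP_LAST, 100)]))
shallow_reader = participant.create_datareader(
    topic, qos=dds.Qos([dds.HistoryQosPolicy(dds.DDSHistoryKind.KEEP_LAST, 1)]))

# Both wait for the maintained non-volatile data, but each reader ends up with
# at most its own history depth of historical samples per instance.
deep_reader.wait_for_historical_data(dds.DDSDuration(30, 0))     # up to 100
shallow_reader.wait_for_historical_data(dds.DDSDuration(30, 0))  # only the newest
```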


To clarify what I tried: I made a topic with a normal, reasonable history_length and made a writer from it. Then I tried to make a second topic instance (using the same domain participant) with a history_length of 1 for my reader. Creating that second topic failed. No big deal, just a minor disappointment. Note that I have no desire to limit the read queue itself to a length of 1, because I need to be able to handle bursts of data without losing any. Again, not a big deal; I'll just throw out the historical data I don't want.


Hmm .. I still smell some confusion about durability and history:

As 'durability' is about preserving non-volatile data, potentially (for TRANSIENT/PERSISTENT) outside the scope/lifecycle of the producers/consumers, configuring it can NOT be tied to QoS policies that apply to those (potentially non-existing) writers and readers; it is therefore configured/defined at topic level (under the DURABILITY_SERVICE QoS attributes). And as topic definitions are system-wide, those durability settings are also system-wide.

Then there's reader (and also writer) history, which defines purely local behavior: in the case of reader history, it specifies how many historical samples to maintain in that reader, something typically used to prevent samples being 'lost' (pushed out of that FIFO queue) in bursty environments. And that 'depth' is fully controllable when creating a reader (or multiple readers).


Yes. My use case just doesn't fit the OpenSplice data model as well as it might. I care about the difference between late joiner data (samples written before I created my reader) and current data (samples written after I created my reader, though I may not have actually read them yet). Most readers only want a single sample (the most recent) of late joiner data. (I am trying to avoid the term "historical").

So I call `wait_for_historical_data` on the reader after creating it, then discard all samples except the most recent before I start processing samples. It means I may discard some current data (especially since `wait_for_historical_data` can take a shockingly long time) but it's good enough. It may well be better than trying to process the 5+ seconds of current data that accumulates while I am waiting for `wait_for_historical_data` to finish. In any case I see no alternative, since I have found no way to tell the difference between late joiner data and current data in the reader.
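
Roughly, that workaround looks like the sketch below. It is only a sketch: take(), the (sample, info) tuple shape, info.valid_data and the per-sample key attribute are assumptions about the Python bindings, not verified API.

```python
def latest_late_joiner_samples(reader, max_wait_seconds=30):
    """Wait for late-joiner data, then keep only the newest sample per instance."""
    # wait_for_historical_data can take a surprisingly long time to return
    reader.wait_for_historical_data(dds.DDSDuration(max_wait_seconds, 0))

    latest = {}
    for sample, info in reader.take():  # drain everything currently in the reader
        if not info.valid_data:         # skip dispose/unregister notifications
            continue
        # Assuming take() returns samples oldest-to-newest, later samples for
        # the same key simply overwrite earlier ones.
        latest[sample.key] = sample     # 'key' stands in for the topic's key field(s)
    return latest

# Usage: call once right after creating the reader, then switch to normal
# read/take processing of live data.
# initial = latest_late_joiner_samples(reader)
```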

