OpenSplice DDS Forum

All Activity


  1. Today
  2. Hi aphelix, I see that you first create your writer using the topicQos (in which case the non-overlapping parts, such as the WriterDataLifecycleQosPolicy, get initialized to their default settings, which is TRUE in this case), then get the WriterQos, modify its autodispose setting, and set it back as the new WriterQos. Although that is not illegal according to the DDS specification, we don't support changeable QoS in our DDSI stack yet. Can you try modifying the autodispose setting before you create your Writer (see the sketch below) and see if that solves your problem? I am curious to hear the result. Regards, Erik.
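     [Editor's note: a minimal sketch, in the classic OpenSplice C++ API, of what Erik suggests: build the WriterQos first and pass it to create_datawriter(), instead of calling set_qos() afterwards. The 'publisher', 'topic' and 'topic_qos' variables are assumed to exist as in the post below; everything else is illustrative, not the poster's actual code.]

         // Sketch: set autodispose_unregistered_instances BEFORE creating the DataWriter.
         DDS::DataWriterQos dw_qos;
         publisher->get_default_datawriter_qos(dw_qos);

         // Inherit the topic-level policies, like DATAWRITER_QOS_USE_TOPIC_QOS would.
         publisher->copy_from_topic_qos(dw_qos, topic_qos);

         // Keep persistent samples around after the writer application terminates.
         dw_qos.writer_data_lifecycle.autodispose_unregistered_instances = false;

         DDS::DataWriter_ptr writer =
             publisher->create_datawriter(topic, dw_qos, NULL, DDS::STATUS_MASK_NONE);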
  3. Hello Erik. Topic records in the persistent xml file are deleted when I stop the publisher, as you mentioned. But I have set the autodispose_unregistered_instances value as below, and the result is still as you indicate. Note: I have 3 durability types (doesn't seem very important): TRANSIENT -> ospl-volatile, RESIDENT -> ospl-transient, PERSISTENT -> ospl-persistent.

     CREATING_OSPL_RESOURCE;
     {
         ReturnCode_t status;
         CHAR* topicName = const_cast<CHAR*>(osplParam->topicName.c_str());

         //(1) Registering type...
         CORBA::String_var typeName = osplParam->typeSupport->get_type_name();
         status = osplParam->typeSupport->register_type(participant.in(), typeName);
         if (status != DDS::RETCODE_OK) {
             const STRING excp = "Cannot call TypeSupport::register_type. RetCode is " + OSPLConnector::RetCodeName[status];
             ERROR(excp.c_str());
             return false;
         }

         TopicQos topic_qos;
         status = participant->get_default_topic_qos(topic_qos);
         if (status != DDS::RETCODE_OK) {
             const STRING excp = "Cannot call DomainParticipant_var::get_default_topic_qos. RetCode is " + OSPLConnector::RetCodeName[status];
             ERROR(excp.c_str());
             return false;
         }

         // RELIABILITY...
         switch (osplParam->reliability) {
             case RELIABLE: {
                 topic_qos.reliability.kind = RELIABLE_RELIABILITY_QOS;
                 break;
             }
             case BEST_EFFORT: {
                 topic_qos.reliability.kind = BEST_EFFORT_RELIABILITY_QOS;
                 break;
             }
         }

         // DURABILITY: setting topic qos policies...
         HistoryQosPolicy tmpHistoryQosPolicy;
         switch (osplParam->durability) {
             case TRANSIENT: {
                 topic_qos.durability.kind = VOLATILE_DURABILITY_QOS;
                 tmpHistoryQosPolicy.kind = KEEP_LAST_HISTORY_QOS;
                 tmpHistoryQosPolicy.depth = DpsApplication::GetInstance()->GetTransientBufferSize();
                 break;
             }
             case RESIDENT: {
                 topic_qos.durability.kind = TRANSIENT_DURABILITY_QOS;
                 tmpHistoryQosPolicy.kind = KEEP_LAST_HISTORY_QOS;
                 tmpHistoryQosPolicy.depth = isKeyed ? 1 : DpsApplication::GetInstance()->GetResidentBufferSize();
                 break;
             }
             case PERSISTENT: {
                 topic_qos.durability.kind = PERSISTENT_DURABILITY_QOS;
                 tmpHistoryQosPolicy.kind = KEEP_LAST_HISTORY_QOS;
                 tmpHistoryQosPolicy.depth = isKeyed ? 1 : DpsApplication::GetInstance()->GetPersistentBufferSize();
                 break;
             }
             default: {
                 ERROR("Undefined durability type.");
                 return false;
             }
         }

         // SETTING TOPIC HISTORY QOS.
         topic_qos.history.kind = tmpHistoryQosPolicy.kind;
         topic_qos.history.depth = tmpHistoryQosPolicy.depth;

         //(2) CREATING TOPIC.
         Topic_ptr topic = participant->create_topic(topicName, typeName, topic_qos, NULL, STATUS_MASK_NONE);
         if (!topic) {
             ERROR("Cannot call DDS::DomainParticipant::create_topic.");
             return false;
         }

         //(3) CREATING WRITER.
         DataWriter_ptr writer = publisher->create_datawriter(topic, DATAWRITER_QOS_USE_TOPIC_QOS, NULL, STATUS_MASK_NONE);
         if (!writer) {
             ERROR("while calling DDS::Publisher::create_datawriter.");
             return false;
         }

         //(4) CREATING READER.
         DataReader_ptr reader = subscriber->create_datareader(topic, DATAREADER_QOS_USE_TOPIC_QOS, NULL, STATUS_MASK_NONE);
         if (!reader) {
             ERROR("Cannot call DDS::Subscriber::create_datareader.");
             return false;
         }

         //(5) PERSISTENT SPECIFIC OPERATIONS.
         if (osplParam->durability == PERSISTENT) { // <<<<<<<<<<<<<<<<<<<<<< HERE Erik :))))
             /* Topic instances are runtime entities for which DDS keeps track of whether
              * (1) there are any live writers,
              * (2) the instance has appeared in the system for the first time,
              * (3) the instance has been disposed, meaning explicitly removed from the system.
              * Setting the dataWriter's autodispose_unregistered_instances QoS policy to FALSE
              * (the default is TRUE, causing your persistent samples to become NOT_ALIVE_DISPOSED
              * after termination of your writer application, because the instances are disposed
              * before being unregistered). */
             DataWriterQos dw_qos;
             writer->get_qos(dw_qos);
             dw_qos.writer_data_lifecycle.autodispose_unregistered_instances = false;
             writer->set_qos(dw_qos);

             /* The wait_for_historical_data() operation waits (blocks) until all "historical"
              * data is received from matched DataWriters. <<<<<<<<<<<<<<<<<<<<<< WAITING HISTORY HERE Erik :))))
              * "Historical" data means DDS samples that were written before the DataReader
              * joined the DDS domain (for persistent and resident). */
             DDS::Duration_t a_timeout;
             a_timeout.sec = 20;
             a_timeout.nanosec = 0;
             reader->wait_for_historical_data(a_timeout);
         }

         //(6) Reader qos.
         DataReaderQos dr_qos;
         reader->get_qos(dr_qos);
         dr_qos.history.kind = tmpHistoryQosPolicy.kind;
         dr_qos.history.depth = tmpHistoryQosPolicy.depth;
         reader->set_qos(dr_qos);

         vars->osplTopic = topic;
         vars->osplWriter = writer;
         vars->osplReader = reader;
         osplResourceMap.insert(OSPL_RESOURCE_PAIR(topicName, *vars)); // Storing it. :))
         return true;

         /*
          * Possible other qos policies to consider:
          * Latency budget
          * Deadline
          * Transport priority
          */
     }

     So, I guess I can't set the qos policies properly :((( Thanks...
  4. Hi aphelix, Are the messages disappearing when their Writer is deleted? If so, you might want to check the WriterDataLifecycleQosPolicy in your WriterQos for a field named autodispose_unregistered_instances. The default setting of this field is TRUE, meaning that when you unregister an instance (and deleting a Writer implicitly unregisters all its instances) you also automatically dispose it. For the persistent store this means the persistent data samples should all be purged, hence an empty store is left. If this is indeed the case, try setting the autodispose_unregistered_instances field to FALSE (which is the right thing to do for any TRANSIENT/PERSISTENT data) and see if that solves your problem. (And let us know if it indeed does.) Regards, Erik Hendriks.
  5. Hi! I can see the messages in the persistent xml file (MyMessage_Topic.xml) after publishing them. But after a while, restarting the application, the topic records in the persistent xml file are deleted; only a few of them remain. Why are the written records deleted? Is there any configuration that I have missed? Note: I am using OpenSplice version 6.9. Thanks in advance for your help.
  6. Earlier
  7. Great, that makes it really easy. Thanks a lot, this has been a really helpful discussion. I'll drop a quick line once I've got everything working. Best wishes - Reinhard
  8. Hi Reinhard, The data_available callback will trigger on any incoming update, so either on an incoming sample or on an instance lifecycle event. That means that you don't need an additional triggering event to catch the NOT_ALIVE events, and you can handle both the samples and the instance state changes in the same callback. Regards, Erik Hendriks.
  9. Hi Erik, my plan was to update a user interface, to get direct feedback when a writer/instance goes down; I think the event of the writer going down would not trigger data_available. I hope I can do exactly as you suggest in the liveliness_changed callback though, while still using ANY in data_available. Will be trying that next! Thanks -Reinhard
  10. Hi Reinhard, I guess you want to read ALIVE data, but take NOT_ALIVE data so as to release the resources of instances that will no longer be updated? If that is indeed the case, why not simply do the read call using the ALIVE instance_state mask, followed by a take call using the NOT_ALIVE mask, in your data_available callback (as sketched below)? Regards, Erik Hendriks.
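     [Editor's note: a minimal sketch of that read/take pattern in the classic OpenSplice C++ API. The topic type Foo, its generated FooDataReader/FooSeq, and the 'fooReader' variable are hypothetical placeholders; the Java5 API that Reinhard uses offers equivalent state selectors.]

         // Inside on_data_available(): 'fooReader' is a typed FooDataReader_var.
         FooSeq data;
         DDS::SampleInfoSeq info;

         // 1) Read (not take) everything still ALIVE, so it stays in the reader cache.
         fooReader->read(data, info, DDS::LENGTH_UNLIMITED,
                         DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE,
                         DDS::ALIVE_INSTANCE_STATE);
         // ... process the alive samples ...
         fooReader->return_loan(data, info);

         // 2) Take everything NOT_ALIVE (disposed or no writers), releasing its resources.
         fooReader->take(data, info, DDS::LENGTH_UNLIMITED,
                         DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE,
                         DDS::NOT_ALIVE_INSTANCE_STATE);
         // ... handle the instances that went away ...
         fooReader->return_loan(data, info);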
  11. First, thanks for your quick and informed replies. I had a wrong assumption about instance handles; I thought of them as something like a domain-wide managed handle for instances. Instance_state is what I'm after. So my revised plan would be: the reader reads (not takes!) the instances in the data_available callback; at some point, when one of the instance writers goes down, I get notified via the reader's liveliness_changed callback; at that point, I read the instances once again to check which ones are still alive using sample.instance_state. I hope I'm on the right track with that. Thanks again and best wishes -Reinhard
  12. Hi Reinhard, What Hans is trying to say is that instance handles are always local to the Entity in which they operate, so if I have two readers subscribing to the same topics, their instance handles will be different even when expressing the same instances. Try to see the instance handle as a kind of pointer into the reader administration: although both readers will have a replicated store, they manage their own instance tables and therefore have their own handles. That is why the content of one Reader is left untouched when you take data out of the other Reader. Likewise, the instance handles on the Writer side are separate from those on the Reader side. I don't really understand your use case though: why do you need to know the instance handle of an instance to determine whether it is still registered? The instance lifecycle state can tell you exactly that without having to resort to its handle: if its instance_state is still ALIVE, it is still registered. Regards, Erik Hendriks.
  13. Instance handles are 'local constructs', so they can't be shared or looked up remotely. Instance-awareness doesn't require any notion/knowledge of handles, as you can/will be notified about liveliness changes and/or can query for alive/not-alive/disposed instances.
  14. Hello, I'm trying out OpenSplice Community using the Java5 API, particularly instances. I'd like to clear up some confusion that I have but just can't figure out.

     C_System instance = new C_System();
     instance.A_sourceID.A_resourceId = 250;
     InstanceHandle instanceHandle1 = writer.registerInstance(instance);
     try {
         Thread.sleep(2000);
     } catch (InterruptedException e) {
         e.printStackTrace();
     }
     InstanceHandle instanceHandle2 = reader.lookupInstance(instance);
     assert(instanceHandle1.equals(instanceHandle2)); // FAILED

     I was wondering: shouldn't the instance handle, after being looked up by the reader, be equal to the one registered by the writer? What I'm trying to achieve is this: I have a DataReader (reading all instances), and at some later point I would like to figure out which of the instances are still registered (some of them might have been disposed by the writers). For that I would like to check whether they can be looked up by the readers, but I can't even get the simple case above working. Thanks for your time. -Reinhard
  15. Hi, You're right w.r.t. losing some instance-awareness with the streams API, so it's indeed not a generic solution for all use cases. W.r.t. Eclipse Cyclone DDS: that's an independent (from ADLINK) project 'owned' by the Eclipse Foundation, but it was defined/started by ADLINK (who still provide the majority of the contributors as well as the project lead at this moment).
  16. tbr, in DDSI2 Throughput: Hi Hans, Thanks for your answer. I had a look into the streams API and played around with the example. Although it is built on top of standard DDS mechanisms, I would unfortunately not be able to use the keyed-topic features provided by DDS anymore (although you can define a keyed structure in the IDL, this information disappears when using streams). I am basically relying on DDS to provide one queue per instance and let DDS dispatch the samples to the different queues. With the streams API it seems that I need to do it myself (I could for sure have a different stream id per instance of my struct, but this would defeat the purpose of using the streams API in the first place since, in my system, I have a lot of different instances and not that many samples to handle for each instance). I also had a look into the "Eclipse Cyclone DDS" implementation. This is very interesting. Is it a project fostered and supported by ADLINK / OpenSplice, or is it an independent project? Are there any plans for OpenSplice (whether Community or Commercial) to reach this kind of performance with DDSI?
  17. Hi, I suspect that it's one of the DDSI threads that maxes out at 100% of the core it's running on. We've created a small layer on top of DDS (called 'streams') that transparently packs the small samples into a sequence of that same type and with that achieves way better performance (the 'batched' results in that same graph). You can take a look at the bundled streams example to see if that helps. Alternatively, you could take a look at the "Eclipse Cyclone DDS" open-source implementation (https://github.com/eclipse-cyclonedds/cyclonedds ), which re-uses basically the same DDSI implementation but with a slimmed-down DCPS/API on top of it, and which also yields 1M+ samples/sec without resorting to streams-like batching. PS> The fact that OpenSplice's streams API is built 'on top' of (unmodified) DDS implies that it doesn't break interoperability (as it would work on top of anybody's DDS), whereas some vendors have proprietary 'batched-writers' support that does actually break interoperability, as it is reflected in the wire protocol too (exploiting non-standard extensions).
  18. Hello, I am using OpenSplice to exchange data with small effective payload (< 30 bytes) at a high rate (more than 2000 samples produced every 10 ms). I am using the DDSI service and am trying to increase the throughput (in messages per seconds). With a Gigabit Ethernet network and two high-end (linux RT) workstations, I reach the limit of 200,000 samples per second. At that point I use about 200 Mb/s of bandwidth and one and a half CPU (150%), 90% of it being spent in the main thread in DataWriter::write related methods. I found this document https://www.adlinktech.com/en/vortex-opensplice-performance.aspx (fifth curve) that seems to back my analysis. Could you confirm that 200,000 samples per second is the maximum throughput attainable with OpenSplice and DDSI? If not, is there any standard way to increase this number? Regards, Thibault Brezillon
  19. Hi Erik, Thanks for your answers. Interoperability is mandatory in my case, so unfortunately I can use neither the native networking protocol nor the streams API. I used the LATENCY_BUDGET QoS (set to 10 ms) and I saw that there were indeed on average a few more sub-messages per message, thanks. I am now trying to increase the number of samples I can send per second, but am seeing that the bottleneck is the CPU. With a Gigabit Ethernet network and two high-end (Linux RT) workstations, I reach the limit of 200,000 samples per second (each sample containing three 64-bit integers) and with 2000 different instances. At that point I use about 200 Mb/s of bandwidth and one and a half CPUs (150%), 90% of it being spent in the main thread in DataWriter::write related methods. I am using a release build of the latest version of OpenSplice Community. Is there any way to increase this number (specific QoS, different API)? Regards, Thibault
  20. Hi Thibault, My colleague just suggested a 4th way to save bandwidth: by exploiting the latency_budget QosPolicy. By default this is set to 0, meaning every sample is put on the wire immediately when it is published. If you set it a bit bigger, multiple samples can be packed together, thus sharing a common header and thereby saving bandwidth (see the sketch below). Especially smaller samples may benefit from such an optimization, where you exchange a little bit of latency for better throughput. Regards, Erik Hendriks.
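     [Editor's note: a hedged sketch of a 10 ms latency budget in the classic OpenSplice C++ API; 'publisher', 'subscriber' and 'topic' are placeholders. Since latency_budget is a request/offered (RxO) policy, the offered writer budget must not exceed the requested reader budget, so a matching budget is set on the reader side too.]

         // Writer side: allow up to 10 ms of packing before samples go on the wire.
         DDS::DataWriterQos dw_qos;
         publisher->get_default_datawriter_qos(dw_qos);
         dw_qos.latency_budget.duration.sec = 0;
         dw_qos.latency_budget.duration.nanosec = 10 * 1000 * 1000; // 10 ms
         DDS::DataWriter_var writer =
             publisher->create_datawriter(topic, dw_qos, NULL, DDS::STATUS_MASK_NONE);

         // Reader side: request at least the same budget so the Qos stays compatible.
         DDS::DataReaderQos dr_qos;
         subscriber->get_default_datareader_qos(dr_qos);
         dr_qos.latency_budget.duration.sec = 0;
         dr_qos.latency_budget.duration.nanosec = 10 * 1000 * 1000; // 10 ms
         DDS::DataReader_var reader =
             subscriber->create_datareader(topic, dr_qos, NULL, DDS::STATUS_MASK_NONE);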
  21. Hi Thibault, The inline Qos is a parameterized list that can be used to transport the relevant Qos values of the Writer at the moment it is writing the sample, but many vendors use this parameterized list for other purposes as well. In our case we use it for the following purposes:
     - To transmit the identity of the sample's Writer and its instance (2 x 12 bytes), which for OpenSplice is different from the identifiers used in the standardized ddsi interoperability specification. Please keep in mind that ddsi is an interoperability protocol and was standardized at a later stage than the core DDS specification. Actually, DDSI and the core DDS specification contradict one another in the way they identify their DDS entities: the core DDS spec (and OpenSplice, which was built according to this spec) identifies an entity by an array of three longs, whereas the ddsi specification identifies an entity by an array of 16 octets, for which it describes exactly how to fill them so as to avoid collisions between vendors. Because of this inconsistency we have to include our own identifiers as extra payload in the message.
     - To transmit our own sequence number, as opposed to the one mandated by the ddsi spec (2 x 4 bytes). Again, the way sequence numbers are assigned as specified in the ddsi specification did not match the way we numbered our messages internally, therefore we have to add our own internal sequence number as payload to the message. Because of support for coherent_updates, we have to include a 2nd sequence number to indicate the starting message of the coherent update.
     - A 4-byte header needs to precede this extra payload according to the ddsi specification.
     So all in all the extra payload is needed to correctly correlate data that is received through different paths, for example a TRANSIENT sample that is received directly through ddsi, but also through the alignment protocol of the durability service (which by the way is not standardized yet) and that still uses our own identifiers. There are a number of things you could do to save bandwidth:
     - Use the native networking protocol (only in the commercial edition). Here you don't need to convert from the OpenSplice identifiers into the ddsi identifiers, so no extra payload is needed. Of course this is not an option if you need interoperability with other vendors.
     - Configure ddsi not to include the key-hash that is sent with every message. This will save you about 20 bytes per message.
     - Try to batch multiple small messages into one bigger message before writing them into DDS (see the sketch below). We offer an API called the OpenSplice streams that can do this automatically for you.
     Hope that gives you some context and directs you to a workable solution. Regards, Erik Hendriks.
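     [Editor's note: to make the batching suggestion concrete, a hand-rolled sketch in the classic OpenSplice C++ mapping. The IDL types Reading/ReadingBatch, BATCH_SIZE, nextValue() and batchWriter are all hypothetical; the OpenSplice streams API automates exactly this kind of packing.]

         // Hypothetical IDL, compiled with the IDL preprocessor:
         //   struct Reading      { long id; long long value; };
         //   struct ReadingBatch { sequence<Reading> items; };
         //
         // One DDS sample then carries many application messages, so the RTPS
         // submessage header and inline Qos are paid once per batch instead of
         // once per message.
         const CORBA::ULong BATCH_SIZE = 100; // illustrative batch size
         ReadingBatch batch;
         batch.items.length(BATCH_SIZE);
         for (CORBA::ULong i = 0; i < BATCH_SIZE; ++i) {
             batch.items[i].id = i;
             batch.items[i].value = nextValue(); // hypothetical producer function
         }
         batchWriter->write(batch, DDS::HANDLE_NIL); // typed ReadingBatchDataWriter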
  22. Hello, I am using OpenSplice to exchange data with a small effective payload (< 30 bytes) at a high rate (more than 2000 samples produced every 10 ms). I am using the DDSI2 service and am trying to reduce the bandwidth usage. Using Wireshark, I noticed that each sample results in an RTPS submessage of about 120 bytes, with 28 bytes of serializedData and about 60 bytes of inline QoS. Why is the inline QoS provided with each submessage? Is there a way to configure OpenSplice / DDSI2 in order to optimize the effective-payload / global-payload ratio? Thanks in advance for your answer. Best regards, Thibault Brezillon
  23. ok, thanks for the reply
  24. Hello student15, Yes, the files are purposefully empty. This is due to the somewhat unfortunate way the examples are built within our internal testing setup.
  25. I'm working with the C++ example "Hello World" and am having some trouble. The following files are empty (file size zero): ./opensplice/examples/dcps/HelloWorld/cpp/src/CheckStatus.cpp and ./opensplice/examples/dcps/HelloWorld/cpp/src/DDSEntityManager.cpp. When I compare with the C version of Hello World, these files are not empty. Should the files CheckStatus.cpp and DDSEntityManager.cpp really be empty in the C++ example?