OpenSplice DDS Forum

All Activity


  1. Last week
  2. Hi Subham, The latest C++ ('isocpp2') API indeed also captures the syntax of X-types, yet current OpenSplice (both commercial and community editions) doesn't (and won't) support it, due to architectural implications for that existing code-base. Going forward, though, we WILL support X-types in the emerging "Eclipse Cyclone DDS" (open-source) implementation (https://github.com/eclipse-cyclonedds/cyclonedds). Eclipse Cyclone DDS is already extensively used in robotics (ROS2) and shows an excellent balance between 'raw' performance (in terms of determinism/latency and efficiency/throughput) and footprint (library sizes). X-types is certainly on the roadmap for next year for that project. As an Eclipse IoT open-source project we also welcome contributions/committers from other companies, so it's perhaps a good idea to take a look at the project (https://projects.eclipse.org/projects/iot.cyclonedds) Regards, -Hans
  3. Dear Hans, Thanks for the reply. Is it at least supported in the commercial version? Dynamic topics are an essential part of DDS for any large-scale project. We are trying every possible way to derive it, but due to the lack of OpenSplice documentation it is difficult to investigate the possibility. Anyway, I found the class below related to X-types; can you tell me what its use-cases are? http://download.prismtech.com/docs/Vortex/apis/ospl/isocpp2/html/a00635.html#abb408f35d05245049e0e6cd749ffa531 Regards, Subham
  4. Earlier
  5. Hi Subham, OpenSplice doesn't currently support the X-types specification, but there ARE examples available on how to use the untyped-API to create dynamic readers/writers and topics. You could take a look here: https://github.com/ADLINK-IST/opensplice-tools (especially the pubsub tool) -Hans
  6. I want to create a topic type at runtime, not from the IDL file (which is static in a sense). Is there any support available in the latest community edition (OSPL_V6_9_190925OSS_RELEASE)? I have come across the OMG DDS X-types spec, which supports dynamic types. From the source I can see OpenSplice also has some core source-code support for dynamic topic types. If support is available, can anybody share an example code snippet?
  7. Nothing new or unidentified in the ospl_info, no ospl_error. It had just been crashing out silently. Yes, Java. We played with the shmem ospl XML files but hadn't gotten them to work by the time we figured out the details below. We theorized that the connection between the stack overrun and the canary in gs_report had to have something to do with multithreading. The question was: how can we be achieving a stack overrun in non-Java code? No amount of exception handling appeared to give us a way to intercept the JVM crash. We put a synchronization block around the handlers for write and take, and the problem went away. Just a lesson learned in thread-safety for the OpenSplice community edition?
  8. Hi Hans, Thanks for the response and the link. I haven't heard of channel bonding before, so I'll do some reading. Thanks
  9. hmmm .. never heard about this .. is there a related ospl_info.log or ospl_error.log perhaps? Unlike on Linux, there's no such thing as core files that could be analyzed, so it might be hard to debug .. I guess that you're talking about a Java application? One thing to try would be to download a trial version of our 6.10 commercial(ly supported) version and see if it also crashes (and if so, try to deploy your application as part of a federation by choosing an ospl_shmem_xxx configuration, where it should be much harder to trash ddsi2) .. I'll consult with some of our DDSI experts to see if they'd have a clue ..
  10. Hi Aaron, You're right that OpenSplice doesn't dynamically swap network interfaces. When you provide 'auto' in the config, it will search for the first/best available (and multicast-enabled) adapter, but won't change that dynamically once selected at startup. One thing to consider is to exploit channel bonding, which operates 'below' DDS, i.e. is transparent to it, and which could be used for either load-balancing or dynamic switch-over between multiple adapters .. (see for an intro: https://www.thegeekdiary.com/basics-of-ethernet-bonding-in-linux/ ) It's not something that I have personal experience with (in configuring), but I know that it has been exploited by some users in the past, and most OSes nowadays support it out of the box.
  11. Hi there, Is it possible to have OpenSplice dynamically swap to another network interface if the one it was using becomes unavailable? My particular application, for example, uses an Ethernet cable to transmit data when the cable is plugged in; but if the cable is unplugged, I'd like OpenSplice to automatically switch to wireless if it is available. I think the answer to this question is "No, that isn't possible because OpenSplice binds to a single specific adapter, and to transmit/receive on another interface you would need to recreate your domain participant due to OS/driver/low-level stuff". I hope I'm wrong; could anyone provide any clarification? Thanks for your time Best Regards, Aaron
  12. FWIW, the vs disassembler is tagging the crash to ddsi2.dll and gs_report.c with this line: __fastfail(FAST_FAIL_STACK_COOKIE_CHECK_FAILURE);
  13. Good morning everyone, I keep having a weird crashing issue with the latest community release version from GitHub. The issue is the following: after an unpredictable time, I get the error message "ddsi2.dll stack based buffer overrun" and then the program crashes. This issue keeps occurring on the Windows 10 version only. When I use the latest community release for the Linux version, no crashes or problems ever happen. Both use the same Java version, which is OpenJDK 8 OpenJ9, built by AdoptOpenJDK. It usually happens at some point when I'm trying to send something. Would anyone know why the stack based overrun is happening?
  14. Hi Erik, I've finally got back around to this issue - I replaced my keyed topics with keyless topics. After some initial issues receiving (incorrectly configured viewstate) I managed to get my application working again. It seems as though it is no longer leaking. I will leave it running in the background and monitor it, but I believe you've provided me with the fix to this issue. Thank you for taking the time to provide me with more insight into OpenSplice Best Regards, Aaron
  15. Thanks Erik, There is no real reason for me to use keys here, I'm really only using one topic instance per topic type, one publisher, one subscriber. I will change to keyless topics and KEEP_ALL Best Regards, Aaron
  16. Hi Aaron, Although unregistering your instance will probably do the job here, it might also double your network traffic, since each instance is now being created (by writing your sample), and then being unregistered (the unregister_instance operation writes a so called unregister message, that might have a similar footprint as your first message). If you are only using DDS as a stream, then do you really need to use monotonically increasing keys? You could make the topic keyless (just use a #pragma keylist with an empty keylist), which results in a singleton instance. Every sample you write then belongs to this singleton instance, and so you don't need to do any additional unregistering per sample. However, you might need to switch to KEEP_ALL on both your Reader and your Writer to make sure that older samples are not pushed out of your Reader/Writer cache by newer samples. Regards, Erik Hendriks.
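Erik's keyless-topic suggestion above can be sketched in IDL. This is an illustrative fragment (the module and type names are hypothetical, not from the original posts): a `#pragma keylist` with no key fields makes the topic keyless, so every written sample belongs to one singleton instance and no per-sample unregistering is needed.

```
module Streaming {
    struct VideoChunk {
        unsigned short id;        // no longer used as a key
        sequence<octet> payload;
    };
    #pragma keylist VideoChunk    // empty keylist: keyless topic, singleton instance
};
```

As Erik notes, combine this with KEEP_ALL history on both Reader and Writer so newer samples of the singleton instance don't push unread ones out of the caches.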
  17. Hi Erik, Thanks for the quick reply. Actually yes, I am using a keyed topic, and yes, I am generating the key from an unsigned short "id", which I am incrementing each time I send a new sample. My writer exists for the lifetime of my application, so it sounds like you are right. I will use the unregister_instance operation first thing tomorrow and report back. Thanks once again Best Regards, Aaron
  18. Hi Aaron, Just a quick question: are you using a keyed topic? And are you monotonically increasing your key values for every sample you write? Because in that case you are indeed leaking your instances away. An instance remains available in your reader cache until the Writer decides to unregister that instance, either explicitly (using the unregister_instance operation) or implicitly (by deleting the Writer itself). Can you let us know if this was indeed your scenario and if the suggested fix works for you? Regards, Erik Hendriks.
  19. Hi all, I am a bit new to DDS/OpenSplice - I've written a transmitter and receiver that streams video using pub/sub (I'm not using OpenSplice Streams though, just DCPS). It all seems to work perfectly, however, when it is running over a long period of time, for the reader application, the memory usage will increase in every thread, until the kernel OOM kills my application. This smells like a memory leak but I checked it with valgrind and it seems fine. Additionally, stopping the data stream frees up the memory, so if it was leaking I wouldn't expect that to happen. I've set up my QoS as best I can, in order to achieve what I like. I want max throughput, and don't care if I drop a sample or two along the way. So I've set to volatile so that dropped samples aren't stored anywhere, but I suspect somewhere, some stuff is being stored in memory. So anyway, I suspect that this is just a QoS/configuration issue - are there any QoS policies I may have missed that could cause this to occur? Here are the ones I've configured in my initializer list; I guess all other policies would be using the defaults:

      reader_topic_qos_{dp_.default_topic_qos()
          << dds::core::policy::Durability::Volatile()
          << dds::core::policy::Lifespan(dds::core::Duration(1, 0))
          << dds::core::policy::Liveliness(dds::core::policy::LivelinessKind::AUTOMATIC, dds::core::Duration(1, 0))
          << dds::core::policy::ResourceLimits(10)
          << dds::core::policy::Reliability::BestEffort()},

  Thanks for your time
  20. Just a small note on interoperability and our streams-API: since our streams-API is completely built ON_TOP of (unmodified) DDS, it does NOT impact interoperability (as some other vendor's solution for batched-writers does) .. the fact that we make life easy by creating a sequence_of_original_type topic doesn't cause interoperability issues as that topic can be read by any DDS implementation .. if you view our streams-API as a 'utility-library' that library can also be used i.c.w. other DDS-implementations .. I know that streams is likely not the right-solution for your (right) problem, but there's a general misunderstanding w.r.t. the relationship between interoperability (that is driven by wire-formats) and our streams-API (that doesn't impact wire-formats) ..
  21. Hi Thibault, The requirement to use a KEEP_ALL policy in combination with a TOPIC scope for coherent data is only for the Writer side: your reader side can safely use a KEEP_LAST policy. The reason for this requirement is that writer history is not only used for maintaining historical data (in case of TRANSIENT_LOCAL), but also for the purposes of re-transmission. So consider the following scenario: Writer A sends a coherent update consisting of instances I1 and I2. Instance I1 is successfully acknowledged by all receivers, but I2 is not (yet) and needs to be re-transmitted. Before this re-transmit, the Writer now sends another coherent update consisting of instances I2 and I3. The second I2 sample now pushes the first one out of the Writer history, making it impossible for the first transaction (I1, I2) to complete on all its receivers. The end result is that some nodes have received the set (I1, I2) followed by (I2, I3) while other nodes have only received (I2, I3), which effectively violates the concept of eventual consistency. For that reason, we mandate you to use a KEEP_ALL policy on your Writer side. For the Reader side this is not required because the Reader side will only consume history for completed coherent sets: samples belonging to a not yet completed coherent set will be stored in another administration to which the HistoryQosPolicy does not apply. Hope that answers your question. Regards, Erik Hendriks.
  22. Hi, I am trying to use coherent change sets on a topic, but the creation of the DataWriter fails because the History QoS (KEEP_LAST) is not compatible with the PRESENTATION QoS (access_scope = TOPIC). The logic in my system only requires keeping the last value written to the topic for each instance, so I don't need KEEP_ALL. Is there a way to have the behaviour of KEEP_LAST with coherent change sets? What is the rationale behind requiring KEEP_ALL in order to use coherent change sets? Thanks in advance for your answer. Best regards, Thibault
  23. Hi Erik. I created all the qos variables and set all the properties first. Then I created the topic, reader and writer as you said. It is now working properly. Thank you very much for your help. Best regards.
  24. Hi aphelix, I see that you first create your writer using the topicQos (in which case the non-overlapping parts such as WriterDataLifecycleQosPolicy get initialized to their default settings which is TRUE in this case), then get the WriterQos, modify its auto_dispose setting and set it back as the new WriterQos. Although that is not illegal according to the DDS specification, we don't support changeable Qos in our DDSI stack yet. Can you try modifying the autodispose setting before you create your Writer and see if that solves your problem? I am curious to hear the result. Regards, Erik.
  25. Hello Erik. Topic records in the persistent xml file are deleted when I stop the publisher, as you mentioned. But I have set the autodispose_unregistered_instances value as below, and the result is still as you indicate. Note: I have 3 durability types (the mapping doesn't seem very important): TRANSIENT -> ospl-volatile, RESIDENT -> ospl-transient, PERSISTENT -> ospl-persistent.

      CREATING_OSPL_RESOURCE;
      {
          ReturnCode_t status;
          CHAR* topicName = const_cast<CHAR*>(osplParam->topicName.c_str());

          // (1) Registering type...
          CORBA::String_var typeName = osplParam->typeSupport->get_type_name();
          status = osplParam->typeSupport->register_type(participant.in(), typeName);
          if (status != DDS::RETCODE_OK) {
              const STRING excp = "Cannot call TypeSupport::register_type. RetCode is " + OSPLConnector::RetCodeName[status];
              ERROR(excp.c_str());
              return false;
          }

          TopicQos topic_qos;
          status = participant->get_default_topic_qos(topic_qos);
          if (status != DDS::RETCODE_OK) {
              const STRING excp = "Cannot call DomainParticipant_var::get_default_topic_qos. RetCode is " + OSPLConnector::RetCodeName[status];
              ERROR(excp.c_str());
              return false;
          }

          // RELIABILITY...
          switch (osplParam->reliability) {
              case RELIABLE:
                  topic_qos.reliability.kind = RELIABLE_RELIABILITY_QOS;
                  break;
              case BEST_EFFORT:
                  topic_qos.reliability.kind = BEST_EFFORT_RELIABILITY_QOS;
                  break;
          }

          // DURABILITY: setting topic qos policies...
          HistoryQosPolicy tmpHistoryQosPolicy;
          switch (osplParam->durability) {
              case TRANSIENT:
                  topic_qos.durability.kind = VOLATILE_DURABILITY_QOS;
                  tmpHistoryQosPolicy.kind = KEEP_LAST_HISTORY_QOS;
                  tmpHistoryQosPolicy.depth = DpsApplication::GetInstance()->GetTransientBufferSize();
                  break;
              case RESIDENT:
                  topic_qos.durability.kind = TRANSIENT_DURABILITY_QOS;
                  tmpHistoryQosPolicy.kind = KEEP_LAST_HISTORY_QOS;
                  tmpHistoryQosPolicy.depth = isKeyed ? 1 : DpsApplication::GetInstance()->GetResidentBufferSize();
                  break;
              case PERSISTENT:
                  topic_qos.durability.kind = PERSISTENT_DURABILITY_QOS;
                  tmpHistoryQosPolicy.kind = KEEP_LAST_HISTORY_QOS;
                  tmpHistoryQosPolicy.depth = isKeyed ? 1 : DpsApplication::GetInstance()->GetPersistentBufferSize();
                  break;
              default:
                  ERROR("Undefined durability type.");
                  return false;
          }

          // SETTING TOPIC HISTORY QOS.
          topic_qos.history.kind = tmpHistoryQosPolicy.kind;
          topic_qos.history.depth = tmpHistoryQosPolicy.depth;

          // (2) CREATING TOPIC.
          Topic_ptr topic = participant->create_topic(topicName, typeName, topic_qos, NULL, STATUS_MASK_NONE);
          if (!topic) {
              ERROR("Cannot call DDS::DomainParticipant::create_topic.");
              return false;
          }

          // (3) CREATING WRITER.
          DataWriter_ptr writer = publisher->create_datawriter(topic, DATAWRITER_QOS_USE_TOPIC_QOS, NULL, STATUS_MASK_NONE);
          if (!writer) {
              ERROR("while calling DDS::Publisher::create_datawriter.");
              return false;
          }

          // (4) CREATING READER.
          DataReader_ptr reader = subscriber->create_datareader(topic, DATAREADER_QOS_USE_TOPIC_QOS, NULL, STATUS_MASK_NONE);
          if (!reader) {
              ERROR("Cannot call DDS::Subscriber::create_datareader.");
              return false;
          }

          // (5) PERSISTENT-SPECIFIC OPERATIONS.
          if (osplParam->durability == PERSISTENT) {   // <<<<<<<<<<<<<<<<<<<<<< HERE Erik :))))
              /* Topic instances are runtime entities for which DDS keeps track of whether
               * (1) there are any live writers, (2) the instance has appeared in the system
               * for the first time, and (3) the instance has been disposed, meaning explicitly
               * removed from the system. Setting the DataWriter's autodispose_unregistered_instances
               * QoS policy to FALSE prevents the default behaviour (the default is TRUE, which
               * causes persistent samples to become NOT_ALIVE_DISPOSED after termination of the
               * writer application, because the instances are disposed before being unregistered). */
              DataWriterQos dw_qos;
              writer->get_qos(dw_qos);
              dw_qos.writer_data_lifecycle.autodispose_unregistered_instances = false;
              writer->set_qos(dw_qos);

              /* The wait_for_historical_data() operation waits (blocks) until all "historical"
               * data is received from matched DataWriters.   <<<<<<<<<<<<<<<<<<<<<< WAITING HISTORY HERE Erik :))))
               * "Historical" data means DDS samples that were written before the DataReader
               * joined the DDS domain (for persistent and resident). */
              DDS::Duration_t a_timeout;
              a_timeout.sec = 20;
              a_timeout.nanosec = 0;
              reader->wait_for_historical_data(a_timeout);
          }

          // (6) Reader qos.
          DataReaderQos dr_qos;
          reader->get_qos(dr_qos);
          dr_qos.history.kind = tmpHistoryQosPolicy.kind;
          dr_qos.history.depth = tmpHistoryQosPolicy.depth;
          reader->set_qos(dr_qos);

          vars->osplTopic = topic;
          vars->osplWriter = writer;
          vars->osplReader = reader;
          osplResourceMap.insert(OSPL_RESOURCE_PAIR(topicName, *vars)); // Storing it. :))

          return true;

          /* Possible other qos settings to consider:
           * Latency budget, Deadline, Transport priority */
      }

  So. I guess I can't set the qos policies properly :((( Thanks...
  26. Hi aphelix, Are the messages disappearing when their Writer is deleted? If so, you might want to check the WriterDataLifecycleQosPolicy in your WriterQos for a field named autodispose_unregistered_instances. The default setting of this field is TRUE, meaning that when you unregister an instance (and deleting a Writer implicitly unregisters all its instances) you also automatically dispose it. For the persistent store this means the persistent data samples should all be purged, hence an empty store is left. If this is indeed the case, try setting the autodispose_unregistered_instances field to FALSE (which is the right thing to do for any TRANSIENT/PERSISTENT data) and see if that solves your problem. (And let us know if it indeed does solve your problem.) Regards, Erik Hendriks.
  27. Hi! I can see the messages in the persistent xml file (MyMessage_Topic.xml) after publishing them. But after a while, on restarting the application, the topic records in the persistent xml file are deleted; only a few of them remain. Why are the written records deleted? Is there any configuration that I have missed? Note: I am using OpenSplice version 6.9. Thanks in advance for your help.