openTCS-NeNa | An open-source ROS 2 vehicle driver for OpenTCS | Robotics library

by nielstiben | Java | Version: v2.0.1 | License: BSD-3-Clause

kandi X-RAY | openTCS-NeNa Summary

openTCS-NeNa is a Java library typically used in Manufacturing, Utilities, Automotive, Automation, Robotics, and OpenCV applications. openTCS-NeNa has no reported vulnerabilities, has a build file available, has a permissive license, and has low support. However, openTCS-NeNa has 2 bugs. You can download it from GitHub.

OpenTCS and ROS 2 are both open-source software packages. OpenTCS can be used as a fleet manager to autonomously manage self-driving vehicles. ROS 2 is a widely used software package that takes care of common functionality that many AGVs share, such as support for sensors, cameras, and SLAM. The openTCS-NeNa software is an OpenTCS vehicle driver that bridges the gap between ROS 2 robots and the OpenTCS fleet manager. The initial development of this vehicle driver was part of my Bachelor's thesis at Saxion University and part of project NeNa. Ever since, I have continued developing this software into what it is today.

            kandi-support Support

              openTCS-NeNa has a low active ecosystem.
              It has 62 stars and 30 forks. There are 7 watchers for this library.
              It had no major release in the last 12 months.
              There are 10 open issues and 9 have been closed. On average, issues are closed in 75 days. There is 1 open pull request and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of openTCS-NeNa is v2.0.1

            kandi-Quality Quality

              openTCS-NeNa has 2 bugs (1 blocker, 0 critical, 0 major, 1 minor) and 160 code smells.

            kandi-Security Security

              openTCS-NeNa has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              openTCS-NeNa code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              openTCS-NeNa is licensed under the BSD-3-Clause License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              openTCS-NeNa releases are available to install and integrate.
              Build file is available. You can build the component from source.
              Installation instructions are available. Examples and code snippets are not available.
              It has 5599 lines of code, 346 functions and 97 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed openTCS-NeNa and discovered the below as its top functions. This is intended to give you an instant insight into openTCS-NeNa implemented functionality, and help decide if they suit your requirements.
            • Initialize the components
            • Dispatch to dispatch to a point button
            • Display dispatch to coordinate system action
            • Enable the enable button action button
            • Enables execution
            • Start a node
            • Enable navigation goal task listener
            • This method is called when the movement goal is executed
            • Executes an action by name
            • Provides panels available for the given description and process model
            • Disable state request
            • Called when a navigation goal has been updated
            • Initialize the robot
            • Disable this transport
            • Create a custom transferable process model
            • Enable input validation
            • Reset all position values
            • Checks whether the list of operations can be processed
            • Called when a navigation goal is completed
            • Generate an AMclP response
            • Called when a navigation goal is rejected
            • Enqueues a specified request
            • Compute the checksum for a telegram
            • Tries to match the given response
            • Installs the nexus communication adapter
            • Mark the document as valid

            openTCS-NeNa Key Features

            No Key Features are available at this moment for openTCS-NeNa.

            openTCS-NeNa Examples and Code Snippets

            No Code Snippets are available at this moment for openTCS-NeNa.

            Community Discussions

            QUESTION

            URDF loading incorrectly in RVIZ but correctly on Gazebo, what is the issue?
            Asked 2022-Mar-22 at 13:41

            I have imported a URDF model from SolidWorks using the SW2URDF plugin. The model loads correctly in Gazebo but looks weird in RViz; even when trying to teleoperate the robot, the revolute joint of the manipulator moves instead of the wheels. Has anyone faced this issue before, or does anyone have a solution to it? Here is how it looks in Gazebo

             Here is how it looks in RViz

             Here is the URDF file of the model:

            ...

            ANSWER

            Answered 2022-Mar-22 at 13:41

            So, I realized two problems:

             First, you have to change the fixed frame in the global options of RViz to world, or provide a transformation between map and world.

             Second, your URDF seems broken. There is something wrong with your revolute-type joints: changing their type to fixed resolved the problem. I think it's best if you ask a separate question with a minimal example regarding this second problem.
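For reference, the joint type is declared on the <joint> element in the URDF; a hypothetical minimal fragment (the link and joint names, axis, and limits here are placeholders, not taken from the asker's model):

```xml
<!-- Hypothetical minimal URDF wheel joint.
     If RViz misbehaves, temporarily changing type="revolute" to
     type="fixed" helps isolate whether the joint definition is at fault. -->
<joint name="wheel_joint" type="revolute">
  <parent link="base_link"/>
  <child link="wheel_link"/>
  <origin xyz="0 0.1 0" rpy="0 0 0"/>
  <axis xyz="0 1 0"/>
  <limit lower="-3.14" upper="3.14" effort="1.0" velocity="1.0"/>
</joint>
```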

            Source https://stackoverflow.com/questions/71567347

            QUESTION

            How can I find the position of a "bounding-boxed" object with lidar and camera?
            Asked 2022-Feb-24 at 23:23

            This question is related to my final project. In a Gazebo simulation environment, I am trying to detect obstacles' colors and calculate the distance between the robot and the obstacles. I am currently identifying their colors with the help of OpenCV methods (object with bounding box), but I don't know how I can calculate their distance to the robot. I have my robot's position. I will not use stereo. I know the size of the obstacles. Waiting for your suggestions and ideas. Thank you!

            My robot's topics :

            • cameras/camera/camera_info (Type: sensor_msgs/CameraInfo)
            • cameras/camera/image_raw (Type: sensor_msgs/Image)
            • sensors/lidars/points (Type: sensor_msgs/PointCloud2)
            ...

            ANSWER

            Answered 2022-Feb-24 at 23:23

            You can project the point cloud into image space, e.g., with OpenCV (as in here). That way, you can filter all points that are within the bounding box in the image space. Of course, projection errors because of differences between both sensors need to be addressed, e.g., by removing the lower and upper quartile of points regarding the distance to the LiDAR sensor. You can use the remaining points to estimate the distance, eventually.

            We have such a system running and it works just fine.
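A minimal Python sketch of that pipeline, assuming a simple pinhole model with known intrinsics (fx, fy, cx, cy) and LiDAR points already transformed into the camera frame; in a real setup you would take the intrinsics from camera_info and use cv2.projectPoints with the LiDAR-to-camera extrinsics. All function names here are illustrative:

```python
import math

def project_point(p, fx, fy, cx, cy):
    """Pinhole projection of a 3D point (camera frame, z forward) to pixels."""
    x, y, z = p
    if z <= 0:
        return None  # point is behind the camera
    return (fx * x / z + cx, fy * y / z + cy)

def distance_to_boxed_object(points, bbox, fx, fy, cx, cy):
    """Estimate object distance from points that project into the image
    bounding box, trimming the lower/upper distance quartiles as suggested."""
    x_min, y_min, x_max, y_max = bbox
    ranges = []
    for p in points:
        uv = project_point(p, fx, fy, cx, cy)
        if uv is not None and x_min <= uv[0] <= x_max and y_min <= uv[1] <= y_max:
            ranges.append(math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2))
    if not ranges:
        return None  # no returns inside the box
    ranges.sort()
    n = len(ranges)
    trimmed = ranges[n // 4 : (3 * n) // 4] or ranges  # drop outer quartiles
    return sum(trimmed) / len(trimmed)
```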

            Source https://stackoverflow.com/questions/71254308

            QUESTION

            What is the more common way to build up a robot control structure?
            Asked 2022-Feb-12 at 15:18

            I’m a college student and I’m trying to build an underwater robot with my team.

            We plan to use an STM32 and an RPi. We will put our controller on the STM32 and the high-level algorithms (like path planning, object detection…) on the RPi. The reason we designed it this way is that the controller needs to be calculated fast, while the high-level algorithms need more overhead.

            But later I found out there are tons of packages in ROS that support IMUs and other attitude sensors. Therefore, I assume many people might build their controllers on a board that can run ROS, such as an RPi.

            As far as I know, the RPi is slower than the STM32 and has fewer ports to connect to sensors and motors, which makes me think that the RPi is not a desirable place to run a controller.

            So I'm wondering: did I design it all wrong?

            ...

            ANSWER

            Answered 2022-Feb-12 at 15:18

            Robot applications vary so much, and the suitable structure depends very much on the use case, so it is difficult to give a standard answer; I'll just share my thoughts for your reference.

            In general, I think a Linux SBC (e.g. RPi) + MCU controller (e.g. STM32/ESP32) is a good solution for many use cases. I personally use RPi + ESP32 for a few robot designs, for these reasons:

            1. Linux is not a good realtime OS; an MCU is good at handling time-critical tasks, like motor control and IMU filtering;
            2. Some protection mechanisms need to stay reliable even when the central "brain" hangs or the whole system runs into low voltage;
            3. MCUs are cheaper, smaller, and flexible to distribute to any part inside the robot, which also helps modularized design thinking;
            4. Many new MCUs are actually powerful enough to handle sophisticated tasks and can offload a lot from the central CPU.

            Source https://stackoverflow.com/questions/71090653

            QUESTION

            How can I find the angle between two turtles (agents) in a network in the NetLogo simulator?
            Asked 2021-Dec-15 at 10:03

            In a formation, robots are linked with each other, and the number of robots in a neighbourhood may vary. If one robot has 5 neighbours, how can I find the angle between that robot and each of its neighbours?

            ...

            ANSWER

            Answered 2021-Dec-15 at 10:03

            (Following a comment, I replaced the sequence of <face + read heading> with just using towards, which I had overlooked as an option. For some reason the comment I am referring to was deleted quickly, so I don't know who gave the suggestion, but I read enough of it from the notification.)

            In NetLogo it is often possible to use turtles' heading to know degrees.

            Since your agents are linked, a first thought could be to use link-heading, which directly reports the heading in degrees from end1 to end2.

            However note that this might not be ideal: using link-heading will work spotlessly only if you are interested in knowing the heading from end1 to end2, which means:

            If that's something that you are interested in, fine. But it might not be so! For example, if you have undirected links and are interested in knowing the angle from turtle 1 to turtle 0, using link-heading will give you the wrong value:
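To make the direction-dependence concrete, here is a small Python sketch of NetLogo's heading convention (0° = north, increasing clockwise), which is what towards reports between two positions; the function name is illustrative:

```python
import math

def netlogo_heading(from_xy, to_xy):
    """Heading in NetLogo's convention: 0 = north (+y), increasing clockwise.
    This mirrors what `towards` reports from one turtle to another."""
    dx = to_xy[0] - from_xy[0]
    dy = to_xy[1] - from_xy[1]
    # atan2(dx, dy) (note the argument order) measures clockwise from north.
    return math.degrees(math.atan2(dx, dy)) % 360
```

Note that the heading from B to A differs from the heading from A to B by 180°, which is exactly why link-heading (always end1 to end2) gives the wrong value when you care about the opposite direction on an undirected link.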

            Source https://stackoverflow.com/questions/70197548

            QUESTION

            Targetless non-overlapping stereo camera calibration
            Asked 2021-Dec-08 at 03:13

            Overlapping targetless stereo camera calibration can be done using feature matchers in OpenCV and then using the 8-point or 5-point algorithms to estimate the fundamental/essential matrix, which can then be decomposed into the rotation and translation matrices.

            How to approach a non-overlapping stereo setup without a target?

            Can we use visual odometry (like ORB-SLAM) to calculate the trajectory of both cameras (the cameras would be rigidly fixed) and then use hand-eye calibration to get the extrinsics? If yes, how can the transformations of each trajectory be mapped to the gripper->base and target->camera transformations? Or is there another way to apply this algorithm?

            If hand-eye calibration cannot be used, are there any recommendations for achieving targetless non-overlapping stereo camera calibration?

            ...

            ANSWER

            Answered 2021-Dec-08 at 03:13

            Hand-eye calibration is enough for your case. Just get the trajectory from each camera by running ORB-SLAM. Then calculate the relative poses along each trajectory and get the extrinsics by SVD. You might need to read some papers to see how to implement this.

            This is sometimes called motion-based calibration.

            Source https://stackoverflow.com/questions/70034304

            QUESTION

            ROS: Publish topic without 3 second latching
            Asked 2021-Nov-29 at 18:44

            As a premise I must say I am very inexperienced with ROS.

            I am trying to publish several ROS messages, but for every publish that I make I get the "publishing and latching message for 3.0 seconds" notice, which looks like it blocks for 3 seconds.

            I'll leave you with an example of how I am publishing one single message:

            ...

            ANSWER

            Answered 2021-Nov-29 at 18:44

            Part of the issue is that rostopic CLI tools are really meant to be helpers for debugging/testing. It has certain limitations that you're seeing now. Unfortunately, you cannot remove that latching for 3 seconds message, even for 1-shot publications. Instead this is a job for an actual ROS node. It can be done in a couple of lines of Python like so:
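A minimal sketch of such a node, assuming ROS 1 with rospy (the node name, topic, and message are placeholders; the brief sleep gives subscribers time to connect before the single publish):

```python
def build_publisher_kwargs(queue_size=1):
    """Keyword arguments for a non-latched publisher; latch=False avoids the
    'latching message for 3.0 seconds' behaviour of rostopic pub."""
    return {"queue_size": queue_size, "latch": False}

def main():
    # Deferred import so the helper above stays importable without ROS.
    import rospy
    from std_msgs.msg import String

    rospy.init_node("one_shot_publisher")
    pub = rospy.Publisher("/chatter", String, **build_publisher_kwargs())
    rospy.sleep(1.0)  # let subscribers connect before the one-shot publish
    pub.publish(String(data="hello"))

if __name__ == "__main__":
    main()
```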

            Source https://stackoverflow.com/questions/70157995

            QUESTION

            How to access the Optimization Solution formulated using Drake Toolbox
            Asked 2021-Nov-20 at 02:41

            A C++ novice here! The verbose terminal output says the problem is solved successfully, but I am not able to access the solution. What is the problem with the last line?

            ...

            ANSWER

            Answered 2021-Nov-20 at 02:41

            You will need to change the line

            Source https://stackoverflow.com/questions/70042606

            QUESTION

            Detect when 2 buttons are being pushed simultaneously without reacting to when the first button is pushed
            Asked 2021-Oct-22 at 16:58

            I'm programming a robot's controller logic. On the controller there are 2 buttons. There are 3 different actions tied to the 2 buttons: one occurs when only the first button is pushed, the second when only the second is pushed, and the third when both are pushed.

            Normally, when the user means to hit both buttons, they hit one after the other. This has the consequence of executing an incorrect action.

            Here is part of the code.

            ...

            ANSWER

            Answered 2021-Oct-22 at 16:58

            You could use a short timer, which is restarted every time a button press is triggered. Every time the timer expires, you check all currently pressed buttons. Of course, you will need to select a good timer duration to make it possible to press two buttons "simultaneously" while keeping your application feel responsive.

            You can implement a simple timer using a counter in your loop. However, at some point you will be happier with an event based architecture.
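A sketch of that timer idea in Python (the class, button, and action names are made up for illustration; `now` would come from your control loop's clock):

```python
class ChordDetector:
    """Collects button presses for a short window so 'both buttons' is
    recognised even when the user hits them slightly apart."""

    def __init__(self, window=0.15):
        self.window = window      # seconds to wait after the latest press
        self.pressed = set()
        self.deadline = None

    def on_press(self, button, now):
        """Record a press and (re)start the timer, as the answer suggests."""
        self.pressed.add(button)
        self.deadline = now + self.window

    def poll(self, now):
        """Call from the control loop; returns an action once the timer expires."""
        if self.deadline is None or now < self.deadline:
            return None
        combo, self.pressed = frozenset(self.pressed), set()
        self.deadline = None
        if combo == {"A", "B"}:
            return "both"
        if combo == {"A"}:
            return "first"
        if combo == {"B"}:
            return "second"
        return None
```

A shorter window feels more responsive but makes "simultaneous" presses harder to hit; tuning it is the trade-off the answer describes.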

            Source https://stackoverflow.com/questions/69676420

            QUESTION

            Why does my program make my robot turn the power off?
            Asked 2021-Oct-19 at 05:05

            I'm trying to put together a programmed robot that can navigate the room by reading instructions off signs (such as bathroom-right). I'm using the AlphaBot2 kit and an RPi 3B+.

            The image processing part works well, but for some reason the motion control doesn't work. I wrote a simple PID controller that "feeds" the motors, but as soon as the motors start turning, the robot turns off.

            ...

            ANSWER

            Answered 2021-Oct-03 at 14:33

            It is probably not the software. Your power supply is not sufficient or stable enough to power both your motors and the Raspberry Pi. It is a very common problem. Either:

            • Use separate power supplies, which is recommended
            • Or increase your main power supply and use some sort of power stabilization

            What power supply and power configuration are you using?

            Source https://stackoverflow.com/questions/69425729

            QUESTION

            How to set up IK Trajectory Optimization in Drake Toolbox?
            Asked 2021-Oct-16 at 18:09

            I have read multiple resources that say the InverseKinematics class of Drake toolbox is able to solve IK in two fashions: Single-shot IK and IK trajectory optimization using cubic polynomial trajectories. (Link1 Section 4.1, Link2 Section II.B and II.C)
            I have already implemented the single-shot IK for a single instant, as shown below, and it is working. Now how do I go about doing it for a whole trajectory using dircol or something? Is there any documentation to refer to?

            ...

            ANSWER

            Answered 2021-Oct-16 at 18:09

            The IK cubic-polynomial is in an outdated version of Drake. You can check out https://github.com/RobotLocomotion/drake/releases/tag/last_sha_with_original_matlab. In the folder drake/matlab/systems/plants@RigidBodyManipulator/inverseKinTraj.m

            Source https://stackoverflow.com/questions/69590113

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install openTCS-NeNa

            NOTE: The OpenTCS-ROS2 driver has only been tested on Ubuntu 20.04 in combination with the Turtlebot 3 with ROS 2 Foxy in the Gazebo simulation environment.
            Install ROS 2 Foxy, see instructions here. (If you built openTCS-NeNa from source, extract the generated distribution with: unzip -d ~/ ~/openTCS-NeNa/build/distributions/openTCS-NeNa-<version>-bin.zip)
            Install the Turtlebot 3 environment for ROS2 Foxy, see instructions here. Make sure you follow the instructions for ROS 2 Foxy by selecting the right tab.
            The software is tested with Gradle v6.8.1 in correspondence with OpenTCS 5.0. If you experience issues with older or newer Gradle editions, please install Gradle 6.x. You can check your current Gradle version with: gradle -v. Gradle v6.8.1 can be downloaded from the Gradle Releases page.
            It is mandatory to have JDK 13 installed. The current version of OpenTCS uses JDK 13, and therefore openTCS-NeNa also uses JDK 13. Oracle JDK and OpenJDK both work. Make sure that both your Java executor (java) and Java compiler (javac) are set to JDK 13. Check your Java executor with java -version and your Java compiler with javac -version; both should return version 13. If you have a different Java version, you should install JDK 13 and set the default Java version (instructions: https://askubuntu.com/questions/121654/how-to-set-default-java-version).
            Download the OpenTCS-NeNa RELEASE 2.0 binary at the releases page.
            Extract the binary to a location of choice: unzip -d ~/ ./openTCS-NeNa-5.0.0-RELEASE_2.0-bin.zip
            Start the OpenTCS Kernel: open a new terminal (CTRL + ALT + T), change the working directory to the OpenTCS Kernel (cd ~/openTCS-NeNa-5.0.0-RELEASE_2.0-bin/openTCS-NeNa-Kernel), and start the kernel: sh startKernel.sh
            Start the OpenTCS Plant Overview: open a new terminal (CTRL + ALT + T), change the working directory to the OpenTCS Plant Overview (cd ~/openTCS-NeNa-5.0.0-RELEASE_2.0-bin/openTCS-NeNa-PlantOverview/), and start the Plant Overview: sh startPlantOverview.sh
            In the Plant Overview, load the Turtlebot3 example project: File -> Load Model... (CTRL + L). Look for the following file, which is included in the source code base: openTCS-NeNa-Documentation/src/docs/turtlebot3_world_example_plant/example_model_scaled.xml. Once you have found it, open it!
            Persist the loaded model to the OpenTCS Kernel: File-> Persist model in the kernel (ALT + P).
            Switch the Plant Overview to ‘operating mode’: File-> Mode-> Operating mode (ALT + O).
            Start the OpenTCS Kernel Control Center (don't close the Kernel and Plant Overview!): open a new terminal (CTRL + ALT + T), change the working directory to the Kernel Control Center (cd ~/openTCS-NeNa-5.0.0-RELEASE_2.0-bin/openTCS-NeNa-KernelControlCenter/), and start it: sh startKernelControlCenter.sh
            On the upper tab, select Vehicle Driver
            Double click on the first vehicle in the list (‘Bus1’) and open the ‘ROS2 Options’ panel.
            Enable the driver. You can specify a custom namespace if you have multiple ROS 2 robot instances; if you only have one robot, you can leave it empty. The default domain ID for ROS 2 Foxy is 30; if you are unsure what this means, leave it at 30.
            Set the initial position of the Turtlebot by clicking on Set Initial Point. Select point-hengelo, which is the closest point to the Turtlebot's default starting point. This step can be skipped if you have already set the initial point in RViz.
            The AGV is now ready to be used. You can test it by pushing the ‘Dispatch to coordinate’ button or, for instance, by creating a transport order.

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/nielstiben/openTCS-NeNa.git

          • CLI

            gh repo clone nielstiben/openTCS-NeNa

          • sshUrl

            git@github.com:nielstiben/openTCS-NeNa.git


            Consider Popular Robotics Libraries

            openpilot

            by commaai

            apollo

            by ApolloAuto

            PythonRobotics

            by AtsushiSakai

            carla

            by carla-simulator

            ardupilot

            by ArduPilot

            Try Top Libraries by nielstiben

            Cryptocurrency-Watcher

            by nielstiben | Java

            02450-Deep-Learning-Project

            by nielstiben | Jupyter Notebook

            DTU-MLOP-2

            by nielstiben | Python

            MLOPS-Project

            by nielstiben | Python

            j--

            by nielstiben | Java