Here we collect a list of frequently asked questions:

Q: How to start a simulation?
A: Creating a Simulation

Q: How to create a robot?
A: Creating a Robot

Q: How to create a controller?
A: Creating a Controller

Q: How do I add variables to be visible/configurable from the console?
A: Short answer: You use addParameter() or addParameterDef() from the Configurable interface.
Long Answer: Configurable

Q: How do I add variables to be visible in Guilogger/MatrixViz or written to file?
A: Short answer: You use addInspectableValue() or addInspectableMatrix() from the Inspectable interface.
Long Answer: Inspectable

Q: How do I record a video?
A: Press Ctrl+r in the graphical window to start/stop video recording. A new directory with the single frames will be created (the name is reported on the console). Afterwards, go into this directory and use the provided script (see encodevideo). The videos are always in sync with the simulation, even if your computer is too slow: just set the speed you want (realtimefactor) and the final video will have the correct timing. Do not use maximal speed (realtimefactor=0). All agents are also logged and tracked automatically, and the files go into the video folder. The logfiles contain a stamp for each frame, such that frames and data can be perfectly associated. You can use this, for instance, with matrixviz to generate a video of the matrices.

Q: How do I attach additional sensors?
A: You can attach sensors (and even motors) to each robot at creation time. You create the robot as usual and then attach sensors by providing an Attachment description that decides to which primitive/joint the sensor/motor is attached. Here is an example:

  robot->addSensor(std::make_shared<SpeedSensor>(1), Attachment(-1));

Attachment(-1) means the main primitive. Note the creation of smart pointers with make_shared. See the Sensor class and its subclasses. Examples: ode_robots/simulations/tests/torque_sensor/main.cpp

Q: My robot is exploding, what can I do?
A: This typically means the robot is wrongly constructed. Here is what you can do: start the simulation with

  ./start -pause -allkeys

Pressing w in the graphical window then shows the wire frame, which may help you to see how the joints are aligned. Make sure the initial positions of the segments are such that they do not intersect (except where connected by a joint) and that the joint angles are within the ranges that are typically set by the servo motors. Another typical reason is that your motors are far too strong and/or your segments are too light. You can temporarily try simstepsize=0.0001 to see how things move. This typically avoids falling apart, but is not a final solution.

Q: How to attach a camera to a robot?
A: You can attach one or more cameras to any robot. You need to create a Camera and a CameraSensor. The latter converts the frames from the camera into a vector of sensor values. For a more detailed description see Camera.

Q: How to change the wiring of sensors and motors between robot and controller?
A: You have to define your own wiring as a subclass of AbstractWiring. Check the implementations of One2OneWiring (selforg/wirings/one2onewiring.cpp) and CopyWiring (selforg/wirings/copywiring.cpp).

Q: How to record/display the trajectory of a robot?
A: Enable the tracking options of the agent after its initialization. See TrackRobot and the example in ode_robots/examples/sphericalrobot/main.cpp. This will track the main object of the robot (returned by getMainPrimitive()). You can also track individual segments of the robot with agent->addTracking(primitiveIndex, TrackRobot(…), color).

Q: How to reset/relocate a robot?
A: You can press Ctrl+h to relocate a robot to its initial position; however, only the main primitive is brought back into its initial configuration. You can do it programmatically by storing the initial position (and orientation) and restoring it later. This also lets you move/reorient the robot as you wish. For example, place the robot initially with:

  robot->place(TRANSM(0.0, .0, 1.0));

Then later, e.g. in command(), you relocate and reorient the robot:

  robot->moveToPose(ROTM(M_PI/2,0,0,1)*TRANSM(0.0, .0, 1.0));

Q: How to use the derivative sensor?
A: The DerivativeSensor computes the time derivative of a given sensor. The sensor whose derivative should be computed is passed as a parameter to the constructor. For example, the derivative sensor can be used to model an acceleration sensor:

  DerivativeSensor* accSensor = new DerivativeSensor(new SpeedSensor(1.0, SpeedSensor::TranslationalRel, SpeedSensor::XYZ), 1.0);

The second parameter allows rescaling the derivative by a given scaling factor. This can be useful if the derivatives are small. If the original sensor returns more than one value, the derivatives are stored in the same order as the unmodified sensor values.
As the derivative sensor inherits from Sensor, it can be attached like any other sensor.

Q: How to use the range finder?
A: The RangeFinder class provides a more convenient interface to the RaySensorBank. The main purpose of this interface is to easily create a range finder, i.e., several ray sensors at equidistant angles. The angle of a beam is measured with respect to the orientation of the Primitive to which the range finder is attached. Thus, a beam with an angle of 0 points in the same direction as the underlying primitive, and a beam with an angle of 90° points orthogonally to the left of the primitive's orientation. To create a range finder with 15 beams in the range [90°, -90°] attached to a primitive body:

  rangeFinder->registerSensorRange(15, M_PI/2., -M_PI/2., maxRange, height, drawMode);

There are three additional parameters: maxRange is the maximum range of each beam; height moves the range finder upwards to avoid unwanted collisions between the beams and other parts of the robot; drawMode specifies how the beams are visualized (see RaySensor).

faq.txt · Last modified: 2014/09/22 14:30 by georg