Getting started with MoRoTeCo Toolbox

MoRoTeCo Toolbox stands for Mobile Robot Team Control Toolbox. It is a suite of Matlab/Simulink blocks. Most of them are written as S-functions using a simplified scheme in M-script (hence the long name: Level 1 M-file S-functions). If you have even moderate experience in writing M-files, you will have no problem understanding the code hidden behind a block's mask.

How the MoRoTeCo Toolbox fits into your task

This figure depicts the general control structure:
The plant is your robot, or a set of robots. The controller is your computer, which is interfaced to a camera for plant measurement, an RS-232 COM port for communicating with the plant, and Matlab/Simulink for signal processing and visualization.

The controller can usually be represented by this general structure:
The MoRoTeCo Toolbox provides Simulink blocks that will aid you in constructing a complete controller:

Additionally, it provides simulation blocks that imitate robot behavior.

This toolbox is by no means perfect: you are welcome to aid in its development.


Installing the toolbox

Extract the archive contents to a directory of your choice. Then add this directory and all its subdirectories to the Matlab search path: Matlab Start->Desktop Tools->Path->Add with subfolders-> (select the folder where you extracted the toolbox) -> Save, then Close.
Next, instruct Matlab to update its database with this new toolbox: Matlab Start->Desktop Tools->View Source Files->Refresh Start Button. Alternatively, you may restart Matlab.
If everything went properly, the MoRoTeCo icon should be shown in Matlab's Start menu, Help, Demos, and the Simulink Library Browser:

Attention: to fully utilize the MoRoTeCo Toolbox you also need the CMU1394 VisionTools Toolbox and the RTMC9S12-Target Toolbox. You need to add their folders to Matlab's search path: click File->Set Path->Add with subfolders . . . and select the appropriate folders. You can download the toolboxes mentioned above from the University of Adelaide webpage: CMVision and RTMC9S12-Target

Example: simulated robot

Open the model demo_command_simple_robot.mdl and familiarize yourself with it.


You can recognize the parts of a plant-controller model: the plant itself (robot simulation), the controller, and the supervisory functions (visualization and setpoint input).


All position coordinates are transmitted in pixels and should not exceed the VGA window (640x480 px); angles are always in radians.
For any new model, you should use a fixed-step solver. To select one, go to Simulation->Configuration Parameters->Solver->Type: fixed-step, ode1 (Euler); use a fixed-step size (fundamental sample time) of 0.1 s.
To let the simulation run for a long time, you can type "inf" into "Stop time".


Controlling the robot: basics

The robot has 3 important state variables: its x, y position and its orientation. You can use the x and y coordinates to calculate the distance from the robot to the target position:

d = sqrt((x_t - x_r)^2 + (y_t - y_r)^2)

where d is the distance to the target, (x_r, y_r) denotes the robot position and (x_t, y_t) are the target position coordinates.
To move the robot towards the target, it must first perform a turn. To calculate the direction from the robot position to the target we use the formula:

alpha = atan2(y_t - y_r, x_t - x_r)

The robot must turn to align with that direction.
The distance and angle are calculated by the calculate_distance_angle block.
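The math inside the calculate_distance_angle block can be sketched as follows (a Python illustration of the formulas above, not the block's actual M-code; the function name is made up):

```python
import math

def distance_and_bearing(xr, yr, xt, yt):
    """Distance (pixels) and bearing (radians) from robot (xr, yr) to target (xt, yt)."""
    d = math.hypot(xt - xr, yt - yr)      # d = sqrt((xt-xr)^2 + (yt-yr)^2)
    alpha = math.atan2(yt - yr, xt - xr)  # direction to target, already in [-pi, pi]
    return d, alpha
```

For example, a target at (100, 100) seen from a robot at the origin lies at distance 100*sqrt(2) px, bearing pi/4 rad.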

Next, we calculate the desired speed and turning rate for the robot:

In this example a very simple principle is used: if the distance to the target is > 0, move forward; if the robot is not oriented towards the target, turn it. Speed and turn rate are limited by saturation blocks. There are additional dead-zone blocks to stop the robot from oscillating around the target.
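The same principle can be sketched in a few lines (a hypothetical Python sketch of the dead-zone/saturation arrangement; the limits and dead-zone widths are illustrative, not the demo model's actual values):

```python
def clamp(v, lo, hi):
    """Saturation block: limit v to the range [lo, hi]."""
    return max(lo, min(hi, v))

def dead_zone(v, width):
    """Dead-zone block: output zero for small inputs to avoid oscillation."""
    return 0.0 if abs(v) < width else v

def simple_controller(distance, angle_error,
                      max_speed=1.0, max_turn_rate=0.5,
                      dist_dz=5.0, angle_dz=0.05):
    """If far from the target, drive forward; if misaligned, turn."""
    speed = clamp(dead_zone(distance, dist_dz), 0.0, max_speed)
    turn = clamp(dead_zone(angle_error, angle_dz), -max_turn_rate, max_turn_rate)
    return speed, turn
```

Once both the distance and the angle error fall inside the dead zones, the outputs go to zero and the robot stops instead of hunting around the target.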

The angle of the robot, or a direction, should always be in the range between -pi and +pi; note that when adding or subtracting two angles, the result may fall outside this range. Use the wrap_angle block to wrap the angle back into the proper range.
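The wrap_angle operation is equivalent in principle to the following (a Python sketch, not the block's actual code):

```python
import math

def wrap_angle(a):
    """Wrap an angle to the range [-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))
```

For example, wrap_angle(3.5 * math.pi) gives -0.5*pi: both describe the same heading, but the wrapped value is the shorter turn.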

Example: real robot control

Note: the following models require two additional toolboxes - CMU1394_VisionTools and the rtmc9S12_CW_R13 Real-Time Workshop toolbox. If you cannot run the model, make sure that these toolboxes are on Matlab's search path.

Open demo_command_real_robot:
This model enables you to control an actual robot. It consists of a measurement submodel (Get robot coordinates), a supervision submodel (Get desired position), decision making (generate simple trajectory, position feed-forward compensator/gate) and an output actuator (COM port transmitter block). This model demonstrates how you can use the MoRoTeCo Toolbox.

Get robot coordinates submodel:


The first block (combo . . . ) captures an image, adds a video underlay to the visualization figure, and processes the current image looking for markers; it encapsulates CMU1394_VisionTools to do that. It uses testcolors.txt and cameraconfig.txt located in the current directory. Note that it only reads these configuration files from the current Matlab directory, so you have to set it properly. The output is a 160-element vector which, after reshaping, contains [is_valid, id, xc, yc, area] x 32 rows. is_valid indicates whether there is a valid position marker in a particular row. id is the marker color ID as listed in testcolors.txt: 1 = first, 2 = second, and so on. xc, yc are the marker's centroid coordinates. area is the marker area in pixels; it can be used to discriminate markers that are too small or too big. For example, it sometimes happens that the processing routine catches a highlight as a marker, but its area will be far bigger than that of a proper marker. You must configure this block by adjusting the values in cameraconfig.txt and testcolors.txt. See the block documentation in the blocks list section for further details.
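Decoding that 160-element vector can be sketched as follows (a Python illustration of the layout described above; the function name and the area thresholds are made up for the example):

```python
def decode_markers(vec, min_area=10, max_area=500):
    """Reshape the 160-element vector into 32 rows of
    [is_valid, id, xc, yc, area] and keep plausible markers."""
    assert len(vec) == 160
    rows = [vec[i:i + 5] for i in range(0, 160, 5)]
    markers = []
    for is_valid, marker_id, xc, yc, area in rows:
        # skip empty rows and markers whose area is implausible (e.g. highlights)
        if is_valid and min_area <= area <= max_area:
            markers.append((int(marker_id), xc, yc, area))
    return markers
```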

The next block, Map markers to robot positions, takes the marker list as an input. It uses the process_object_config.m file to load object descriptions and tries to look up object positions and orientations. Here, objects consist of a designator marker (single or double) and an orientator marker. The result is an object list: a 64-element vector which, after reshaping, contains [is_valid, x, y, angle] x 16 rows. You configure the behavior of this block by adjusting the values in process_object_config.m. Refer to the block documentation in the blocks list section for more information.
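The object list has the same flattened layout and can be decoded the same way (Python illustration; the function name is hypothetical):

```python
def decode_objects(vec):
    """Reshape the 64-element vector into 16 rows of
    [is_valid, x, y, angle] and return the valid objects."""
    assert len(vec) == 64
    rows = [vec[i:i + 4] for i in range(0, 64, 4)]
    return [(x, y, angle) for is_valid, x, y, angle in rows if is_valid]
```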

The last block, Add_robot_symbol_to_vis_window, takes a 3-element vector as an input: [x, y, angle]. It is used to visualize the effect of the image recognition procedures: if the robot object was found in the image, it adds an oriented icon to the visualization window. You must specify a unique robot ID in the configuration window.


Get desired position submodel:


This submodel basically acts as a user interface. The first block, Get Command Point, adds cursor capability to the visualization figure. It also adds buttons (R1, R2 . . .) in the top-left corner. You can specify the button numbers in the configuration window, for example: [1 2 3] or [1 4 9 12].
The block output is a 5-element vector: [flip-flop trigger, enabled togglebutton ID, x position, y position, angle]. The flip-flop trigger changes its value every time the user clicks in the visualization window; this can be used to trigger actions inside the model. Enabled togglebutton ID contains the ID of the currently selected button. x position, y position is the selected position in pixels. The angle is in radians.
To use this block, click and hold in the visualization figure to designate a point, drag the mouse in the desired direction, then release the mouse button. The block's output will be updated.
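Inside a model, you would typically react to the flip-flop trigger by detecting a change between successive samples, along these lines (a Python sketch of the idea, not an actual toolbox block):

```python
def make_change_detector():
    """Return a function that yields True whenever its input
    differs from the previous sample (i.e. the user has clicked)."""
    prev = [None]  # one-sample memory
    def changed(trigger):
        fired = prev[0] is not None and trigger != prev[0]
        prev[0] = trigger
        return fired
    return changed
```

The detector fires on every toggle of the trigger, regardless of whether it flipped from 0 to 1 or from 1 to 0.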

The second block, Visualize passive object, simply adds an icon of a passive object to the visualization figure. A passive object does not have an orientation, hence the block's input is [x position, y position].
It is used here to give the user feedback after selecting a target point: the icon is placed at the last clicked position.

Generate simple trajectory submodel:


This submodel calculates the distance to the target, and the angle by which the robot must turn in order to face the target. The first block, "calculate distance and angle", takes a 4-element vector [x1 y1 x2 y2] as an input. The distance is in pixels. The angle is measured with respect to the coordinate system, which means that it has to be subtracted from the robot orientation later on. "Wrap angle" is a simple utility that keeps the subtraction result within the +-pi range. This prevents commanding the robot to turn by, for example, 3.5pi radians when it should only turn -0.5pi radians.



Robot program documentation

Simplified version

Models for the robot microcontroller board have an "mc" suffix; models for the PC side have a "pc" suffix.
This is the most basic model for the microcontroller:


The "FreePortComms" blocks receive messages through the RS-232 port. The received values are used to directly set the pulse width using the PWM block.
The digital input block reads impulses from the rotational encoders. The result is accumulated in the "RW counter" block and displayed.
You can use this model in external mode to see that the encoders are actually working and generating impulses.
This particular model is mainly useful as a starting point for a more sophisticated robot program.
This is the PC counterpart to this model:
Note that in order for the communication blocks to work, you have to use the same baud rate, channel number and data type on both sides. In particular, setting these values incorrectly in the receiver blocks can cause Matlab to crash. There is also probably a buffer overflow bug in the RTM toolbox: if you try to transmit/receive too many values at once, the transmitted data is likely to be corrupted.

Rotary speed estimation within limited timeframe using low-resolution encoder

Estimating speed (or taking the derivative of any process value) is not a trivial problem in control systems. Going by the definition, one would take the difference of the process value over a very small time interval. But this presents a problem because:

  • the smaller the time interval, the bigger the estimation error;
  • the bigger the time interval, the smaller the update rate, and effectively the larger the delay from the start of measurement to obtaining the result.

Measurement delay and measurement error are equal enemies of the control engineer, so some kind of tradeoff has to be made.
The robot we use in GCU has 1-channel rotary encoders installed, with a resolution of 4 imp/revolution. This low resolution makes speed estimation especially hard.
The proposed solution is to measure the pulse width within a limited interval. If there is no pulse in this interval, the estimator reports that the speed is below the measurement range; the speed controller is therefore updated with this information at least once per maximum interval. If there is a pulse within the interval limit, the controller is immediately updated with its length, effectively increasing the control bandwidth at higher speeds.
As the microcontroller runs at 1000 cycles/s, I have selected a 5 Hz minimum control rate. Impulses from the encoder are used to reset a counter. If the counter counts up to 200 (that is, 1/5 s), the estimator outputs that value and starts counting again. If the resetting impulse arrives earlier, the output is updated with the current counter value and counting starts again.
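The counting scheme can be modeled per 1 ms cycle as follows (a Python sketch under the 1000 cycles/s and 200-tick assumptions from the text; the function names are illustrative):

```python
MAX_COUNT = 200  # 200 ticks at 1000 cycles/s = 1/5 s, i.e. the 5 Hz minimum update rate

def make_pulse_width_estimator(max_count=MAX_COUNT):
    count = [0]
    def step(pulse):
        """Call once per cycle; pulse=True when an encoder edge arrives.
        Returns the measured pulse width (in ticks) when an update occurs, else None."""
        count[0] += 1
        if pulse:
            width, count[0] = count[0], 0
            return width  # pulse arrived: output its width immediately
        if count[0] >= max_count:
            width, count[0] = count[0], 0
            return width  # no pulse within the limit: speed is below measurement range
        return None
    return step
```

A fast-spinning wheel thus produces frequent, short width readings, while a stopped wheel still yields a reading of 200 every 1/5 s.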
This model demonstrates operation of speed (which is proportional to frequency) estimator:


Advanced robot program - with internal control loop

Open demo_advanced_ctrl_1_mc.
This demo is an example of a more sophisticated robot control program. It can cooperate with demo_command_real_robot.mdl or demo_advanced_ctrl_1_pc.mdl, which you can use to test the robot's operation before linking it with the camera.
The PC counterpart to this model is:


This model works as follows:
Commands are received through the RS-232 serial interface with the FreePortComms_RX block. A command is a 5-element vector: [forward distance, (reserved), turn, motor voltage, trigger].
Next, the encoder impulses are captured by the "left enc" and "right enc" blocks:
The encoder impulses are then mixed to produce forward and rotational motion measurement signals:
Next, these signals decrement presettable counters of forward and rotational motion:
The counters are reset to values received from the master computer, but only on a trigger edge.
Next comes a very simple feed-forward controller: if the forward or rotate counter value is greater than zero, rotation of a wheel is enabled. Note that the forward counter can only be positive, while the rotation counter can be positive or negative.
Finally, the signals from the counters are mixed to determine whether a particular wheel should rotate:
The "forward" command is fed to the "+" inputs of the adders; the "rotate" command is fed to the "-" input for the left wheel and the "+" input for the right wheel. The result is clipped to the 0-1 range. The effect is that if there is a forward command, both wheels rotate, and if there is a rotate command, one wheel is suppressed, so the robot makes a turn.
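The adder/clipping arrangement described above boils down to the following (a Python sketch of the mixing logic; the function names are illustrative):

```python
def clip01(v):
    """Saturation: clip the signal to the 0-1 range."""
    return max(0.0, min(1.0, v))

def mix_wheels(forward, rotate):
    """forward is a 0/1 enable signal; rotate may be -1, 0 or +1.
    Returns (left_enable, right_enable) in the 0-1 range."""
    left = clip01(forward - rotate)   # rotate goes to the "-" input for the left wheel
    right = clip01(forward + rotate)  # and to the "+" input for the right wheel
    return left, right
```

With forward only, both wheels are enabled; with a rotate command, one wheel is suppressed and the robot turns.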

In the last stage, the "turn the wheel on" signal is multiplied by the desired motor voltage received by the communication block:


Additionally, this model features speed estimation blocks:
The speed is not used here for control; it is only reported back to the PC.