
Multi-camera setup with OpenPTrack

The following is a guest post by Daniel Low.

This is the third post of a multi-part series about prototyping a 3D pedestrian tracking system with ZED cameras.

This post outlines the steps required to create a multi-camera setup built upon OpenPTrack. Out of the box, OpenPTrack supports multi-camera setups using Kinect, SwissRanger, or Point Grey 3D/RGB cameras; we will extend it to support ZED cameras as well.

Previously, we finished setting up OpenPTrack to run with a single ZED camera. This time, we will modify OpenPTrack's files to include the ZED camera setup in its multi-camera scripts.

NTP

First, we have to install the dependencies required for a multi-camera setup. The main dependency is the Network Time Protocol (NTP) package, which allows us to synchronise the clocks across multiple devices. This lets the software determine whether detections from different devices are of the same person. You can install the package by simply executing the command below.

$ sudo apt-get install ntp

Once installed, you need to decide which of your devices is your “master” device (the device you will be running the tracking node on) and which are your “client” devices (which simply run the detection nodes and nothing more). Please note that you can run both the tracking node and a detection node on the same device; however, this may reduce tracking accuracy. Check the IP address of each device by running the command below.

$ ifconfig

Take note of the IP addresses of each device before moving on. Next, we will configure the NTP config files. You may follow the instructions in this link to do so. The config file can be found at /etc/ntp.conf.

Please note that the configuration for the “master” device differs from that of the “client” devices. We will briefly cover how to configure each of them; however, not every line in the config file is covered here. Please refer to the OpenPTrack wiki link above for a description of the full config file.

Master Configuration

As mentioned in the OpenPTrack wiki, there are 2 possible configurations. You may pick either one, but use the same configuration across your “master” and “client” devices. We used configuration 2 in our setup. For the “master” device, you need to synchronise its clock with an external server.

While this is not strictly required (you can use your own clock as the reference for synchronising all the devices), it should still be done for greater tracking accuracy. You can find a list of NTP servers close to you on the NTP pool website.

Once you have identified a few servers close to you (two to four is typical), add them to your ntp.conf file in a format similar to the following:

server 0.sg.pool.ntp.org
server 1.sg.pool.ntp.org
server 2.sg.pool.ntp.org
server 3.sg.pool.ntp.org

As a fallback in case there is no internet access, you should also add your own device as a server by entering the following lines.

server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10

Stratum is a rating of how far a server is from a reference clock; NTP will synchronise your clock to the reachable server with the lowest stratum. Most external servers have a stratum between 1 and 3, so giving your own device a stratum of 10 ensures it will not override the external servers when there is an internet connection. More information can be found here.

Next, you need to add a line stating:

restrict 192.168.100.0 mask 255.255.255.0 nomodify 

Replace the IP address range and the subnet mask with your own network's settings. This restricts connections to your “master” device to addresses within that specific range. The nomodify option prevents your “client” devices from changing the clock of the “master”, ensuring your “master” always acts as the point of reference for your “clients”.

Another point to note: if you are using configuration 2, you also need to enter the IP addresses of your “client” devices into the config file by adding the following lines:

peer 192.168.100.102 minpoll 4 maxpoll 6 iburst
peer 192.168.100.103 minpoll 4 maxpoll 6 iburst

Similarly, replace the peer IP addresses with those of your “client” devices. Please note that these lines are not needed for configuration 1. Now, you can move on to configuring NTP for your “client” devices.
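
Putting the pieces together, here is a minimal sketch of the master's /etc/ntp.conf for configuration 2, assuming the example addresses used above (with the “master” at 192.168.100.101); your full config file will contain additional lines.

# external time sources
server 0.sg.pool.ntp.org
server 1.sg.pool.ntp.org

# local clock as fallback
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10

# allow (but do not trust) the local subnet
restrict 192.168.100.0 mask 255.255.255.0 nomodify

# "client" devices
peer 192.168.100.102 minpoll 4 maxpoll 6 iburst
peer 192.168.100.103 minpoll 4 maxpoll 6 iburst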

Client Configuration

If you are using config 1, the setup is relatively straightforward: set your “master's” IP address as the server and enable the broadcast client on your “client” device.

server 192.168.100.101 iburst
broadcastclient

If you are using config 2, the configuration is simple as well: enter the IP addresses of your “master” device and of any other “client” devices you will be running, and allow only the master to modify your clock. The lines below are an example of what to enter.

restrict 192.168.100.0 mask 255.255.255.0
peer 192.168.100.101 minpoll 4 maxpoll 6 iburst
peer 192.168.100.103 minpoll 4 maxpoll 6 iburst

Now, you are done configuring NTP! Use the following command to check whether everything has been set up correctly (note that it can take 10 to 15 minutes for the correct values to appear). The -p flag is simply the short form of --peers, so the same command applies to both configurations.

$ ntpq -p
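
The output should look similar to the illustrative sample below (your servers and values will differ). An asterisk marks the server currently selected for synchronisation, and the st column shows each server's stratum, including the stratum 10 we assigned to the local clock.

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*0.sg.pool.ntp.org  203.0.113.5   2 u   35   64  377    3.214    0.113   0.042
+192.168.100.102    192.168.100.1  3 u   12   16  377    0.254   -0.021   0.008
 LOCAL(0)           .LOCL.       10 l   54   64  377    0.000    0.000   0.000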

Camera Network File

Next, we need to create a config file in OpenPTrack so it is able to identify the various devices and cameras you are using in your setup. A guide can be found here. On each device, add the following lines to your .bashrc file.

export ROS_MASTER_URI=http://192.168.100.101:11311/
export ROS_IP=192.168.100.102
export ROS_PC_NAME=PC1-Example

ROS_MASTER_URI is the IP address of your “master” device, together with port 11311. ROS_IP and ROS_PC_NAME are the IP address of the current device and the name you want it to be identified by in your system. Repeat this for every device. On your “master”, you may replace the IP address with localhost, as such:

export ROS_MASTER_URI=http://localhost:11311/
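
As a quick sanity check, you can confirm the variables are visible to new shells (output trimmed; the values shown are the examples from above):

$ source ~/.bashrc
$ printenv | grep ROS_
ROS_MASTER_URI=http://192.168.100.101:11311/
ROS_IP=192.168.100.102
ROS_PC_NAME=PC1-Example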

Once this is completed on all devices, you need to create a camera_network.yaml configuration file on the “master” device. Create this file at ~/workspace/ros/catkin/src/open_ptrack/opt_calibration/conf. It contains the name of each device that will be running a camera and the various cameras in your network. An example of its contents is shown below.

network:
  - pc: "MASTER_1080"
    sensors:
      - type: zed
        id: "ZED_CAM_3175"
        serial: "SN3175"
  - pc: "BLACK_1080_2"
    sensors:
      - type: zed
        id: "ZED_CAM_3487"
        serial: "SN3487"
      - type: zed
        id: "ZED_CAM_3416"
        serial: "SN3416"

# Checkerboard parameters
checkerboard:
  rows: 7
  cols: 6
  cell_width: 0.085
  cell_height: 0.085

Please note that indentation must use leading spaces, not tabs (YAML does not allow tabs). In the network section, pc is the name of each “client” device that has a camera attached. This includes your “master” if you plan to use it for both tracking and detection nodes (and have a camera connected to it). Furthermore, the name entered for pc has to match the ROS_PC_NAME you previously entered into the .bashrc file.

In the sensors section, enter the type of camera you are using (“zed” in our case) plus an id and serial number for each camera. The main purpose of the id is to differentiate the cameras from one another, so you can pick any identifier you want your camera to be known by (it need not be the actual serial number).

The serial number, however, must be correct: it lets us access the conf files for the ZED cameras if we need to (i.e. for a ZED camera with serial number 3416, the conf file is named SN3416.conf). We will make use of these values later.
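
If you are unsure of a camera's serial numbers, the ZED SDK keeps one such calibration file per camera; on our install these lived in /usr/local/zed/settings/ (the path is an assumption that may differ between SDK versions), so listing that directory shows the serials of previously connected cameras:

$ ls /usr/local/zed/settings/
SN3175.conf  SN3416.conf  SN3487.conf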

Finally, the checkerboard parameters refer to the specifications of the checkerboard you are planning to use for multi-camera calibration. The checkerboard allows the software to determine the location of each camera relative to the others, so you need to print a sufficiently large checkerboard that the cameras can identify from a distance. More information can be found here.

Please note as well that rows and cols do not refer to the actual rows and columns of the checkerboard; they refer to the interior corners where cells intersect (i.e. a board with 7x6 cells would have rows and cols values of 6 and 5 respectively). Lastly, cell_width and cell_height are measured in metres. Once you have created the camera_network.yaml file, you can move on to the next section.

ZED ROS Wrapper

Moving on, we need to modify the launch files for the ZED camera in the ROS wrapper. In previous implementations (and the default wrapper provided), a fixed namespace is used when launching the ZED camera. We need to change this so that each camera gets its own unique namespace, which can be done by accepting an argument and using it as a group namespace in the zed.launch file.

<launch>
  <arg name="group_namespace" default="zed"/>
  ...

  <group ns="$(arg group_namespace)">
    <include file="$(find zed_wrapper)/launch/zed_camera.launch">
      <arg name="group_namespace" value="$(arg group_namespace)"/>
      ...
    </include>
  </group>
</launch>

Next, if we want to use multiple cameras or multiple GPUs on one device (i.e. a single device with two GPUs running one camera per GPU), we need to create arguments for the zed id (different from the id you created in the camera_network.yaml file) and the gpu id, to be passed into the ZED launch files.

Both zed id and gpu id let the wrapper access the devices in the order they are listed by Linux (0 for the first camera/GPU, 1 for the second, and so on), so their values depend on the number of GPUs and cameras attached to a single device. Do note that these parameters are new in the v2.0 wrapper. The arguments can be created as follows:

<arg name="zed_id" default="0"/>
<arg name="gpu_id" default="-1"/>

Below is an example of the full zed.launch file.

<launch>

  <arg name="svo_file" default=""/>
  <arg name="publish_frame" default="false"/>
  <arg name="zed_id" default="0"/>
  <arg name="group_namespace" default="zed"/>
  <arg name="gpu_id" default="-1"/>


  <group ns="$(arg group_namespace)">
    <include file="$(find zed_wrapper)/launch/zed_camera.launch">
      <arg name="svo_file" value="$(arg svo_file)" />
      <arg name="publish_frame" value="$(arg publish_frame)"/>
      <arg name="zed_id" value="$(arg zed_id)" />
      <arg name="group_namespace" value="$(arg group_namespace)"/>
      <arg name="gpu_id" value="$(arg gpu_id)"/>
    </include>
  </group>

</launch>

As you may have noticed, most of the arguments we have introduced are simply passed on to zed_camera.launch. This is because we need those arguments to set up the ZED camera's parameters as well. Below is an example of how we use those arguments in zed_camera.launch:

<launch>
  <arg name="svo_file"/>
  <arg name="zed_id" default="0"/>
  <arg name="group_namespace" default="zed"/>
  <arg name="publish_frame"   default="false"/>
  <arg name="gpu_id" default="-1"/>

  <include file="$(find zed_wrapper)/launch/zed_tf.launch">
      <arg name="publish_frame"   value="$(arg publish_frame)"/>
      <arg name="camera"    value="$(arg group_namespace)"/>
  </include>
  <node name="zed_wrapper_node" pkg="zed_wrapper" type="zed_wrapper_node" output="screen">

    <!-- SVO file path -->
    <param name="svo_filepath" value="$(arg svo_file)" />

    <!-- ZED parameters -->
    <param name="zed_id"                value="$(arg zed_id)" />

    <param name="resolution"            value="2" />
    <param name="quality"               value="3" />
    <param name="sensing_mode"          value="1" />
    <param name="frame_rate"            value="60" />
    <param name="odometry_db"           value="" />
    <param name="openni_depth_mode"     value="0" />
    <param name="gpu_id"                value="$(arg gpu_id)" />

    <!-- ROS topic names -->
    <param name="rgb_topic"             value="rgb/image_rect_color" />
    <param name="rgb_raw_topic"         value="rgb/image_raw_color" />
    <param name="rgb_cam_info_topic"    value="rgb/camera_info" />
    <param name="rgb_frame_id"          value="/$(arg group_namespace)_current_frame" />

    <param name="left_topic"            value="left/image_rect_color" />
    <param name="left_raw_topic"        value="left/image_raw_color" />
    <param name="left_cam_info_topic"   value="left/camera_info" />
    <param name="left_frame_id"         value="/$(arg group_namespace)_current_frame" />

    <param name="right_topic"           value="right/image_rect_color" />
    <param name="right_raw_topic"       value="right/image_raw_color" />
    <param name="right_cam_info_topic"  value="right/camera_info" />
    <param name="right_frame_id"        value="/$(arg group_namespace)_current_frame" />

    <param name="depth_topic"           value="depth/depth_registered" />
    <param name="depth_cam_info_topic"  value="depth/camera_info" />
    <param name="depth_frame_id"        value="/$(arg group_namespace)_depth_frame" />

    <param name="point_cloud_topic"     value="point_cloud/cloud" />
    <param name="cloud_frame_id"        value="/$(arg group_namespace)_current_frame" />

    <param name="odometry_topic"        value="odom" />
    <param name="odometry_frame_id"             value="/$(arg group_namespace)_initial_frame" />
    <param name="odometry_transform_frame_id"   value="/$(arg group_namespace)_current_frame" />

  </node>

  <!-- ROS URDF description of the ZED -->
  <param name="zed_description" textfile="$(find zed_wrapper)/urdf/zed.urdf" />
  <node name="zed_state_publisher" pkg="robot_state_publisher" type="state_publisher">
    <remap from="robot_description" to="zed_description" />
  </node>

</launch>

You may note that we have used the group_namespace argument to name our frame ids. This is because the OpenPTrack system calculates the relative position and orientation of each camera, and identifies each camera by its frame id.

Each ZED camera publishes its pose (the ZED cameras have an odometry function) to its frame id so that ROS can keep track of which direction the camera is facing. If left at the default (zed_current_frame), every ZED camera would publish its orientation to the same frame id, which would cause issues.
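
Once each camera has its own namespace, you can sanity-check the resulting frames with the standard tf tools; for example, assuming a camera launched with group_namespace ZED_CAM_3175:

$ rosrun tf view_frames    # writes frames.pdf showing the whole transform tree
$ rosrun tf tf_echo ZED_CAM_3175_initial_frame ZED_CAM_3175_current_frame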

(Please note: this seems to have been fixed in the latest version of the ROS wrapper, and files have been removed/added, so you may not need to perform the fixes below.)

Furthermore, to address the frame id issue, we have to make 2 more changes. First, open zed_wrapper_nodelet.cpp (in ~/workspace/ros/catkin/src/zed-ros-wrapper/src) and you will notice that the frame id is hardcoded in the publishTrackedFrame function (line 162). We therefore introduce a new variable to carry the frame id set in our launch files: add string odometry_frame_id to the function arguments, and add the line transformStamped.header.frame_id = odometry_frame_id;. Your function should now look similar to the one below.

void publishTrackedFrame(Eigen::Matrix4f Path, tf2_ros::TransformBroadcaster &trans_br, string odometry_frame_id, string odometry_transform_frame_id, ros::Time t) {
   
    geometry_msgs::TransformStamped transformStamped;
    transformStamped.header.stamp = ros::Time::now();
    transformStamped.header.frame_id = odometry_frame_id;
    transformStamped.child_frame_id = odometry_transform_frame_id;
    transformStamped.transform.translation.x = -Path(2, 3);
    transformStamped.transform.translation.y = -Path(0, 3);
    transformStamped.transform.translation.z = Path(1, 3);
    Eigen::Quaternionf quat(Path.block<3, 3>(0, 0));
    transformStamped.transform.rotation.x = -quat.z();
    transformStamped.transform.rotation.y = -quat.x();
    transformStamped.transform.rotation.z = quat.y();
    transformStamped.transform.rotation.w = quat.w();
    trans_br.sendTransform(transformStamped);
}

After you have changed the function, remember to add the new argument to the function calls on lines 674 and 682 as well.
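
For illustration, each call site gains the new argument in the same position as in the signature above. The variable names here are illustrative, as they vary between wrapper versions:

// before
publishTrackedFrame(base_transform, transform_broadcaster, odometry_transform_frame_id, t);
// after: also pass the odometry frame id taken from the launch file
publishTrackedFrame(base_transform, transform_broadcaster, odometry_frame_id, odometry_transform_frame_id, t);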

The other change we have to make is in the zed_tf.launch file. The default tf transform uses the reference frame “map”, which works when there is only a single ZED camera running. However, when running a multi-camera setup with OpenPTrack, another reference frame is already in use and all the cameras have to be mapped relative to it.

We cannot use the default “map” reference frame, or we would end up with disconnected transform trees. Hence, we have to ensure that the camera transforms are published relative to the reference frame used by OpenPTrack. To do so, change the zed_tf.launch file to the following.

<launch>
  <arg name="camera" default="zed"/>
  <arg name="tf_prefix" default="" />
  <arg name="publish_frame" default="false"/>
  <arg name="pi/2" value="1.5707963267948966" />
  <arg name="optical_rotate" value="0 0 0 -$(arg pi/2) 0 -$(arg pi/2)" />

  <node pkg="tf2_ros" type="static_transform_publisher" name="$(arg camera)_base_link1"
    args="0 0 0 0 0 0 1 map $(arg tf_prefix)/zed_initial_frame" unless="$(arg publish_frame)"/>
  
  <node if="$(arg publish_frame)" pkg="tf2_ros" type="static_transform_publisher" name="$(arg camera)_base_link1"
    args="0 0 0 0 0 0 1 $(arg camera)_link $(arg tf_prefix)/$(arg camera)_initial_frame"/>

  <node if="$(arg publish_frame)" pkg="tf2_ros" type="static_transform_publisher" name="$(arg camera)_base_link2"
    args="$(arg optical_rotate) $(arg tf_prefix)/$(arg camera)_initial_frame $(arg tf_prefix)/$(arg camera)_tracked_frame" />
  
</launch>

The if and unless conditional statements allow this launch file to work even with a single ZED camera, as the publish_frame argument will only be true in a multi-camera setup.

Recompile the wrapper using catkin_make; you should not encounter any errors. We can now move on to changing the files in OpenPTrack itself.
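
For reference, we rebuilt from the root of the catkin workspace used throughout this series:

$ cd ~/workspace/ros/catkin
$ catkin_make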

OpenPTrack

As OpenPTrack makes use of scripts to generate the files for multi-camera tracking, we need to add our ZED camera into those files.

Service class

First, we must add the ZED camera to the service/message class. Add the following lines to the OPTSensor.srv file (found at ~/workspace/ros/catkin/src/open_ptrack/opt_msgs/srv):

uint8 TYPE_ZED=5
...
...
# if TYPE_ZED
string serial
uint8 index
...
...

This ensures that the service accepts 2 extra arguments (a string and a uint8) when it identifies a device as a ZED camera. These 2 arguments are the serial number we stated in camera_network.yaml and our zed id (this index starts at 0 and increases if more than one ZED camera is attached to a device). We will make use of these variables to create the launch files for our ZED cameras.

Listener

Next, we need to change the listener.py file (found at ~/workspace/ros/catkin/src/open_ptrack/opt_calibration/apps). This script creates the launch files necessary to launch our ZED cameras for calibration and detection. The changes are as follows:

  1. At line 151, inside the handle_create_sensor_launch method, add the following lines at the end of the elif chain.
    elif request.type == OPTSensorRequest.TYPE_ZED:
       file.write('  <arg name="sensor_name"    default="' + request.id + '" />\n')
       file.write('  <arg name="sensor_serial"  default="' + request.serial + '" />\n')
       file.write('  <arg name="sensor_index"      default="' + str(request.index) + '" />\n')
       file.write('  <!-- Launch sensor -->\n')
       file.write('  <include file="$(find zed_wrapper)/launch/zed.launch">\n')
       file.write('    <arg name="group_namespace"     value="$(arg sensor_name)" />\n')
       file.write('    <arg name="publish_frame"       value="true" />\n')
       file.write('    <arg name="zed_id"              value="$(arg sensor_index)" />\n')
       file.write('    <!-- uncomment the following line if using multiple GPUs in a Node -->\n')
       file.write('    <!--arg name="gpu_id"              value="$(arg sensor_index)" /-->\n')
       file.write('  </include>\n\n')
    
  2. At line 238, inside the handle_create_detector_launch method, add the following lines at the end of the elif chain.
    elif request.type == OPTSensorRequest.TYPE_ZED:
       file.write('  <arg name="sensor_name"    default="' + request.id + '" />\n')
       file.write('  <arg name="sensor_serial"  default="' + request.serial + '" />\n')
       file.write('  <arg name="sensor_index"   default="' + str(request.index) + '" />\n')
       file.write('  <!-- Detection node -->\n')
       file.write('  <include file="$(find detection)/launch/detector_node_zed.launch">\n')
       file.write('     <arg name="sensor_name"              value="$(arg sensor_name)" />\n')
       file.write('     <arg name="sensor_serial"            value="$(arg sensor_serial)" />\n')
       file.write('     <arg name="zed_id"         value="$(arg sensor_index)" />\n')
       file.write('     <!-- uncomment the following line if using multiple GPUs in 1 node -->\n')
       file.write('     <!--arg name="gpu_id"             value="$(arg sensor_index)" /-->\n')
       file.write('     <arg name="ground_from_calibration"  value="true" />\n')
       file.write('  </include>\n\n')
    

    Points 1 and 2 allow us to create launch files for our ZED cameras using the script (an example of a generated sensor launch file is shown after this list). As the comments state, we have included a commented-out gpu_id line that you may uncomment, but ONLY in the generated launch files, never in listener.py itself. If you are using multiple GPUs, then once the launch files are created (they will be named sensor_XX.launch and detection_node_XX.launch, where XX is the name of the sensor), open each generated launch file and uncomment that line to make use of your multiple GPUs.

  3. With multiple ZED cameras, it would be nice to have a configuration file unique to each individual camera as well. To accomplish this, we will copy the existing config file and append a unique sensor name to the end of each copy's filename; the sensor name corresponds to the camera whose settings that file adjusts. First, we need to import the function that lets us copy a file. Add the following line to the list of imports in listener.py.
    from shutil import copyfile
    

    Next, extract the file path of the parameter file by adding the following line at line 58. We will pass this file path to the script in a later step.

    self.detector_params_file = rospy.get_param('~detection_params_file')
    

    Next, copy the file. You can do this by adding the following lines at the end of the handle_create_detector_launch method (but before the return statement, at about line 265).

    # copy a config file unique to each camera
    # [:-8] strips the trailing "zed.yaml" from the default params file path
    dstfile = self.detector_params_file[:-8] + request.id + ".yaml"
    copyfile(self.detector_params_file, dstfile)

    rospy.loginfo(dstfile + ' created!')
    

    After you have made the above changes, save the file. We now have to pass our configuration file path to the script. To do this, open the listener.launch file, found at ~/workspace/ros/catkin/src/open_ptrack/opt_calibration/launch, and add the detection_params_file argument to it. Your listener.launch file should look similar to the example below.

    <?xml version="1.0"?>
    <launch>
        
       <arg name="sensor_launchers_dir"   default="$(find opt_calibration)/launch" />
       <arg name="detector_launchers_dir" default="$(find detection)/launch" />
       <arg name="camera_poses_dir"       default="$(arg detector_launchers_dir)" />
       <arg name="detection_params_file"  default="$(find detection)/conf/ground_based_people_detector_zed.yaml" />
       <!-- Launching calibration initializer -->
       <node pkg="opt_calibration" type="listener.py" ns="$(env ROS_PC_NAME)"
             name="listener" output="screen" required="true">
                
         <param name="sensor_launchers_dir"   value="$(arg sensor_launchers_dir)" />
         <param name="detector_launchers_dir" value="$(arg detector_launchers_dir)" />
         <param name="camera_poses_dir"       value="$(arg camera_poses_dir)" />
         <param name="detection_params_file" value="$(arg detection_params_file)" />
       </node>
        
    </launch>
    

    Now, when we execute the scripts to create our camera launch files, the ZED camera launch files will be created successfully, and a unique configuration file will be created for each camera as well.
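
As promised, here is what the generated sensor launch file would look like for a camera named ZED_CAM_3175 with serial SN3175 and index 0. Only the portion written by the snippet in point 1 is shown; the script writes some additional boilerplate around it.

<launch>
  <arg name="sensor_name"    default="ZED_CAM_3175" />
  <arg name="sensor_serial"  default="SN3175" />
  <arg name="sensor_index"      default="0" />
  <!-- Launch sensor -->
  <include file="$(find zed_wrapper)/launch/zed.launch">
    <arg name="group_namespace"     value="$(arg sensor_name)" />
    <arg name="publish_frame"       value="true" />
    <arg name="zed_id"              value="$(arg sensor_index)" />
    <!-- uncomment the following line if using multiple GPUs in a Node -->
    <!--arg name="gpu_id"              value="$(arg sensor_index)" /-->
  </include>
</launch>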

Calibration_initializer

Third, we need to edit the scripts which interact with listener.py (and pass it the parameters for our ZED cameras): the calibration_initializer.py and detection_initializer.py files. Both can be found at ~/workspace/ros/catkin/src/open_ptrack/opt_calibration/apps.

For the calibration initializer, we have to make the following changes to include our ZED camera in the files generated.

  1. Add the lines below into the createMaster(self) method at about line 147, at the end of the elif chain but inside the for loop (the generated output is shown after this list).
    elif sensor['type'] == 'zed':
         file.write('    <param name="sensor_' + str(index) + '/type"         value="pinhole_rgb" />\n')
         file.write('    <remap from="~sensor_' + str(index) + '/image"       to="/$(arg sensor_' + str(index) + '_name)/rgb/image_rect_color" />\n')
         file.write('    <remap from="~sensor_' + str(index) + '/camera_info" to="/$(arg sensor_' + str(index) + '_name)/rgb/camera_info" />\n\n')
         file.write('<!-- entered zed sensor type -->\n\n') 
    
  2. Similarly, add the lines below at line 213, at the end of the elif chain but inside the for loop as well.
    elif sensor['type'] == 'zed':
         file.write('    <param name="sensor_' + str(index) + '/type"         value="pinhole_rgb" />\n')
         file.write('    <remap from="~sensor_' + str(index) + '/image"       to="/$(arg sensor_' + str(index) + '_name)/rgb/image_rect_color" />\n')
         file.write('    <remap from="~sensor_' + str(index) + '/camera_info" to="/$(arg sensor_' + str(index) + '_name)/rgb/camera_info" />\n\n')
         file.write('<!-- entered zed sensor type -->\n\n')
    
  3. Edit the code in the __invokeService method at line 233 to include the type ‘zed’ in the service call (this is why we first had to add ZED to the service class). You also have to create a counter to keep track of the number of ZED cameras running on a single device, which allows us to assign the appropriate zed id and gpu id. Part of your __invokeService method should look like the code shown below.
    for pc in self.sensor_map:
        service_name = pc + '/' + local_service_name
        rospy.loginfo('Waiting for ' + service_name + ' service...')
        rospy.wait_for_service(service_name)
        zed_index = 0
        # For each sensor
        for sensor_item in self.sensor_map[pc]:

            # Create an OPTSensor message
            sensor_msg = OPTSensorRequest()
            sensor_msg.session_id = self.session_id
            sensor_msg.id = sensor_item['id']
            if sensor_item['type'] == 'kinect1':
                sensor_msg.type = OPTSensorRequest.TYPE_KINECT1
                if 'serial' in sensor_item:
                    sensor_msg.serial = sensor_item['serial']
            elif sensor_item['type'] == 'kinect2':
                sensor_msg.type = OPTSensorRequest.TYPE_KINECT2
                if 'serial' in sensor_item:
                    sensor_msg.serial = sensor_item['serial']
            elif sensor_item['type'] == 'sr4500':
                sensor_msg.type = OPTSensorRequest.TYPE_SR4500
                sensor_msg.ip = sensor_item['ip']
            elif sensor_item['type'] == 'stereo_pg':
                sensor_msg.type = OPTSensorRequest.TYPE_STEREO_PG
                sensor_msg.serial_left = sensor_item['serial_left']
                sensor_msg.serial_right = sensor_item['serial_right']
            elif sensor_item['type'] == 'zed':
                sensor_msg.type = OPTSensorRequest.TYPE_ZED
                sensor_msg.serial = sensor_item['serial']
                # index matches the srv field we added to OPTSensor.srv earlier
                sensor_msg.index = zed_index
                zed_index += 1

            ...
    

    Once you have made the changes, save and exit the file.
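
For reference, with index 0 the lines written in points 1 and 2 expand to the following in the generated calibration launch file:

<param name="sensor_0/type"         value="pinhole_rgb" />
<remap from="~sensor_0/image"       to="/$(arg sensor_0_name)/rgb/image_rect_color" />
<remap from="~sensor_0/camera_info" to="/$(arg sensor_0_name)/rgb/camera_info" />
<!-- entered zed sensor type -->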

Detection_initializer

As mentioned previously, there is an issue with how the transforms of the ZED cameras are published. We previously changed the frames the transforms were published to, in order to link them to the reference frame used by OpenPTrack. Now, we add that link to OpenPTrack's reference frame. Open the detection_initializer.py file and add the following lines at about line 100, inside the createTFLauncher method.

elif self.sensor_map[pc][sensor]['type'] == 'zed':
  file.write('  <node pkg="tf" type="static_transform_publisher" ')
  file.write('name ="' + sensor + '_broadcaster_2"\n')
  file.write('        args="0 0 0 0 0 0 ')
  file.write('/' + sensor + ' /' + sensor + '_link 100" />\n')
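
For a sensor named ZED_CAM_3175, the snippet above would write a static transform publisher similar to the following into the generated tf launcher, linking the camera's calibrated frame to the frame used by the ZED wrapper:

<node pkg="tf" type="static_transform_publisher" name ="ZED_CAM_3175_broadcaster_2"
      args="0 0 0 0 0 0 /ZED_CAM_3175 /ZED_CAM_3175_link 100" />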

Save and close the file. We are nearly done!

Detector_node launch file

Lastly, we need to create a launch file that is called when we start a ZED camera used for detection. This launch file creates the detection node (with the parameters for the ZED camera being used) and then calls the zed.launch file in the ROS wrapper.

You may also notice that this is the file we included when creating the launch files in listener.py. It needs to be created at ~/workspace/ros/catkin/src/open_ptrack/detection/launch, where you will find similar launch files for each camera type (detector_XX.launch, where XX is the type of camera).

Hence, we will mirror what is contained in those files, but edit it so the ZED camera runs instead. Please note that this is different from the detector_depth_zed.launch file we previously created for the single-camera setup. Create a file called detector_node_zed.launch and place the following code inside it.

<?xml version="1.0"?>
<launch>

   <!-- Camera parameters -->
   <arg name="sensor_name"             default="zed"/>
   <arg name="sensor_serial"           default=""/>
   <arg name="intermediate_topic"      default="/detector/detections"/>
   <arg name="ground_from_calibration" default="false"/>
   <arg name="zed_id"          default="0"/>
   <arg name="gpu_id"          default="-1"/>
   

   <!-- Launch sensor -->
   <include file="$(find zed_wrapper)/launch/zed.launch"> 
     <arg name="group_namespace"   value="$(arg sensor_name)"/>
     <arg name="publish_frame"     value="true"/>
     <arg name="zed_id"            value="$(arg zed_id)"/>
     <arg name="gpu_id"      value="$(arg gpu_id)"/>
   </include>

   <!-- Launch ground based people detection node -->
   <node pkg="detection" type="ground_based_people_detector" name="ground_based_people_detector_$(arg sensor_name)" output="screen" respawn="true">
     <rosparam command="load" file="$(find detection)/conf/ground_based_people_detector_$(arg sensor_name).yaml" /> 
     <param name="classifier_file" value="$(find detection)/data/HogSvmPCL.yaml"/>
    <!-- <param name="pointcloud_topic" value="/camera/depth_registered/points"/> -->

     <param name="camera_info_topic" value="/$(arg sensor_name)/rgb/camera_info"/>
     <param name="output_topic" value="$(arg intermediate_topic)"/>
     <param name="pointcloud_topic" value="/$(arg sensor_name)/point_cloud/cloud"/>

     <param name="rate" value="60.0"/>  
     <param name="ground_from_extrinsic_calibration" value="$(arg ground_from_calibration)"/>
   </node>

</launch>

As you may notice, it accepts the arguments we created in the camera_network.yaml file and in the various Python scripts we changed. These arguments are then passed on to the zed.launch file, allowing it to launch the correct camera with the settings we have configured.
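
If you want to test a single camera's detector by hand (outside the generated scripts), the same arguments can be supplied directly on the command line; for example, using the camera names from our camera_network.yaml:

$ roslaunch detection detector_node_zed.launch sensor_name:=ZED_CAM_3175 sensor_serial:=SN3175 zed_id:=0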

Running the commands to calibrate multiple cameras and begin tracking

Finally, we can run the commands to calibrate our cameras and start multi-camera tracking. You can follow the steps outlined in the OpenPTrack wiki here for calibration using the checkerboard. Once calibration has completed successfully, refer to the wiki page here for the commands to begin tracking.

As always, once you have started the tracking and detection nodes, you can change the settings in real time using rosrun rqt_reconfigure rqt_reconfigure. With this, you should be able to perform multi-camera pedestrian tracking using ZED cameras!

Read Part 1: Setting up a 3D pedestrian tracking system
Read Part 2: Implementing OpenPTrack on Ubuntu 16.04