# PeTrack - Usage

This short introduction only illustrates the possibilities and the main functionality of the software PeTrack. The main window is shown adjoining. The top (stereo) and bottom (mono) videos on the overview page of PeTrack illustrate the process of trajectory extraction.

After importing a video, an image sequence or a complete project, the first image is shown in the top left area of the application. Navigation through the image stream is done below the image. To the right of the image, zooming and rotation are possible. The status line in the bottom-most area gives information about the pixel under the mouse: it displays the color, the pixel row and column and, if the calibration has been done correctly, the real-world position at the specified height. The main settings for the steps of trajectory extraction are made on the tabs on the right (calibration, recognition, tracking and analysis), corresponding to the steps of the processing pipeline of the trajectory extraction. These settings are briefly discussed below.

More information, especially on how the single steps work, can be found in a book and in the papers Automatische Erfassung präziser Trajektorien in Personenströmen hoher Dichte (Automatic extraction of precise trajectories in high-density crowds), Collecting pedestrian trajectories and Automatic Extraction of Pedestrian Trajectories from Video Recordings.

The software can be used interactively or from the command line for the automatic generation of trajectories for a sequence of video recordings. The command line options and key bindings can be found on the help pages of the software. The import and export of project files, image sequences, video recordings and trajectories is possible.

Using the software requires a good understanding of the extraction process and recording conditions as good as those in the pictures and videos shown.

### I. Calibration

The calibration tab allows some colour correction and the addition of a border, if the picture size increases during undistortion or tilting of the picture. An adaptive background subtraction is possible, but it is not needed for detection with markers.

The intrinsic parameters of the camera model can be entered manually, and some assumptions can be chosen, such as square pixels, a centre of undistortion that coincides with the picture centre, and the consideration of tangential distortion. The automatic calculation of the intrinsic parameters is also included. The images for the automatic calibration have to show the pattern and must be recorded with the same camera and lens as used for the experiment.
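Intrinsic calibration of this kind typically estimates the focal lengths, the principal point and distortion coefficients. As an illustration only (not PeTrack's actual code, and with made-up parameter values), the following sketch applies the common radial/tangential (Brown-Conrady) distortion model to a normalized image point and maps it to pixel coordinates:

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion
    to a normalized image point (x, y) (Brown-Conrady model)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

def to_pixel(x, y, fx, fy, cx, cy):
    """Map a (distorted) normalized point to pixel coordinates
    using focal lengths (fx, fy) and principal point (cx, cy)."""
    return fx * x + cx, fy * y + cy

# A point on the optical axis is unaffected by distortion:
x_d, y_d = distort(0.0, 0.0, -0.2, 0.05, 0.001, 0.001)  # -> (0.0, 0.0)
```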

The extrinsic parameters of the recorded scene can be entered by positioning a coordinate system on the floor and specifying the altitude of the recording camera, if the value determined by the intrinsic calibration is not good enough. An automatic calculation of the extrinsic parameters using corresponding points is also possible. As long as the head of every person can be seen, slanted views are also supported.
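Once intrinsic and extrinsic parameters are known, a world point can be mapped to a pixel. A minimal sketch of this pinhole projection follows; the rotation, translation and intrinsic values are illustrative, not PeTrack's implementation:

```python
import math

def rot_z(angle):
    """Rotation matrix about the z axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def project(pw, R, t, fx, fy, cx, cy):
    """Project world point pw into pixel coordinates using
    extrinsics (R, t) and intrinsics (fx, fy, cx, cy)."""
    # camera coordinates: pc = R * pw + t
    pc = [sum(R[i][j] * pw[j] for j in range(3)) + t[i] for i in range(3)]
    # pinhole projection onto the image plane
    return fx * pc[0] / pc[2] + cx, fy * pc[1] / pc[2] + cy

# Camera 4 m above the origin of the floor coordinate system,
# looking straight down (identity rotation for simplicity):
u, v = project([1.0, 0.0, 0.0], rot_z(0.0), [0.0, 0.0, 4.0],
               1000.0, 1000.0, 640.0, 360.0)  # -> (890.0, 360.0)
```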

For verification an adjustable alignment grid can be overlaid.

### II. Recognition

The recognition tab allows setting the frequency of the recognition task and the region where the detection is performed. Different markers (e.g. with height information or head direction) have been implemented with variable options.

With monocular cameras, only marked pedestrians can be handled automatically. For unmarked pedestrians, a special stereo camera has to be used, or a semi-automatic procedure with manual detection and automatic tracking can be chosen.

### III. Tracking

In the tracking tab, the repetition of the tracking and the quality level triggering a repetition can be set. If extrapolation is selected, large jumps in the tracking path are detected and the tracked point is replaced by an extrapolated one.
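The extrapolation option can be thought of as comparing each tracked point with a linear prediction from the two previous points. A hedged sketch of that idea (the threshold and the exact criterion are illustrative, not PeTrack's):

```python
def track_step(prev, prev2, tracked, max_jump):
    """If the tracked point jumps too far from the linear
    prediction based on the two previous points, replace it by
    the extrapolated point (illustrative sketch only)."""
    pred = (2 * prev[0] - prev2[0], 2 * prev[1] - prev2[1])
    dist = ((tracked[0] - pred[0]) ** 2 +
            (tracked[1] - pred[1]) ** 2) ** 0.5
    return pred if dist > max_jump else tracked

# Small deviation: the tracked point is kept.
p = track_step((2.0, 0.0), (1.0, 0.0), (3.1, 0.0), 0.5)  # -> (3.1, 0.0)
# Implausible jump: the extrapolated point is used instead.
q = track_step((2.0, 0.0), (1.0, 0.0), (9.0, 0.0), 0.5)  # -> (3.0, 0.0)
```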

The automatic generation of all trajectories and their import and export can also be initiated here. The switch for inserting missing frames allows the insertion of frames which were missed during recording. The other settings only adjust the visualization of the trajectory overlay in the video.
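For a single trajectory, inserting missing frames can be sketched as filling gaps between recorded frames by linear interpolation (PeTrack's own insertion strategy may differ):

```python
def fill_missing_frames(track):
    """track: dict mapping frame number -> (x, y).
    Fill gaps between recorded frames by linear interpolation."""
    frames = sorted(track)
    filled = dict(track)
    for a, b in zip(frames, frames[1:]):
        for f in range(a + 1, b):
            w = (f - a) / (b - a)  # interpolation weight in [0, 1]
            xa, ya = track[a]
            xb, yb = track[b]
            filled[f] = (xa + w * (xb - xa), ya + w * (yb - ya))
    return filled

# Frames 1-3 are missing and get interpolated,
# e.g. frame 2 -> (2.0, 4.0):
filled = fill_missing_frames({0: (0.0, 0.0), 4: (4.0, 8.0)})
```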

The pyramidal search regions can be set by size and level. By default, the size of the search region at the highest pyramid level is sixty percent greater than the average head size, and four pyramid levels are calculated.
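The defaults above imply search-region sizes like the following. This is an illustrative calculation only; the halving per level is the usual image-pyramid convention, not a documented PeTrack formula:

```python
def pyramid_sizes(head_size, levels=4, factor=1.6):
    """Search-region edge length per pyramid level: the top level
    starts 60 % larger than the average head size, and each finer
    level corresponds to half the resolution (illustrative)."""
    top = factor * head_size
    return [top / (2 ** level) for level in range(levels)]

# e.g. for an average head size of 20 px -> [32.0, 16.0, 8.0, 4.0]
sizes = pyramid_sizes(20)
```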

### IV. Analysis

The last tab allows some direct analysis of the trajectories. Up to now, only the velocity can be displayed: the green dot cloud shows the individual velocities and the blue line the velocity averaged over each frame.
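The displayed quantities can be reproduced from exported trajectories. The sketch below assumes positions in metres and a known frame rate; the function names are made up for illustration:

```python
def velocities(points, fps):
    """Per-frame speed of one trajectory from successive (x, y)
    positions in metres; fps converts frame steps to seconds."""
    return [((points[i + 1][0] - points[i][0]) ** 2 +
             (points[i + 1][1] - points[i][1]) ** 2) ** 0.5 * fps
            for i in range(len(points) - 1)]

def mean_per_frame(tracks, fps):
    """Average over all pedestrians, per frame (the blue line);
    the individual values form the green dot cloud."""
    vs = [velocities(p, fps) for p in tracks]
    n = min(len(v) for v in vs)
    return [sum(v[i] for v in vs) / len(vs) for i in range(n)]

# Two pedestrians moving 0.05 m per frame at 25 fps -> about 1.25 m/s
tracks = [[(0.0, 0.0), (0.05, 0.0), (0.10, 0.0)],
          [(1.0, 0.0), (1.05, 0.0), (1.10, 0.0)]]
```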

## Further help

Project examples come with the installation. Command line options and key bindings are described in the help system of the software. This brief documentation of using PeTrack cannot answer all questions; thus, you may contact the author before setting up experiments and the automatic extraction with PeTrack.

## Utility

```
combine in1.txt in2.txt out.txt
```

The tool, which comes with the PeTrack installation, merges trajectories from in1.txt (first crossed camera) and in2.txt (second crossed camera) into out.txt|out.dat with least error. Please use the command line option [-help|-?] for further help.
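The merging step can be approximated once the trajectory format is fixed. The sketch below assumes per-person dictionaries mapping frame -> (x, y) and simply averages positions where both cameras overlap; the real combine tool minimizes the error instead of averaging:

```python
def merge(points1, points2):
    """Merge two camera views of the same trajectory: average the
    positions where both cameras saw the person, otherwise keep
    the one available (a simplified sketch of combine)."""
    merged = {}
    for f in set(points1) | set(points2):
        if f in points1 and f in points2:
            (x1, y1), (x2, y2) = points1[f], points2[f]
            merged[f] = ((x1 + x2) / 2, (y1 + y2) / 2)
        else:
            merged[f] = points1.get(f) or points2[f]
    return merged

# Frame 0 seen by both cameras, frame 1 only by the second one:
out = merge({0: (0.0, 0.0)}, {0: (2.0, 2.0), 1: (3.0, 3.0)})
# -> {0: (1.0, 1.0), 1: (3.0, 3.0)}
```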

## Media

• Pattern for automatic calculation of the intrinsic parameters for the calibration:
• Video from a part of one of our experiments for testing:
• Corresponding PeTrack project for preceding video file: