Here we set up a development environment on Windows 10 for the TrafficCV program, whose code we will deploy and test on a Raspberry Pi.
Traffic speed detection is big business. Municipalities around the world use it to deter speeders and generate revenue via speeding tickets. But the conventional speed detectors, typically based on RADAR or LIDAR, are very expensive.
This article series shows you how to build a reasonably accurate traffic speed detector using nothing but Deep Learning, and run it on an edge device like a Raspberry Pi.
You are welcome to download the code for this series from the TrafficCV Git repository. We assume that you know Python and have basic knowledge of AI and neural networks.
In the previous article, we discussed installation of the operating system on Raspberry Pi, securing it, and configuring it for remote access over WiFi via SSH, RDP, and X from a Windows 10 machine. In this article, we’ll go over setting up a development environment on Windows 10 for cross-platform computer vision and Machine Learning projects to run on our Pi. We’ll also examine the main elements of the TrafficCV program and its basic usage.
We’re going to be using Visual Studio Code and its Remote Development feature combined with its Python language support to code and debug our app on Pi while remaining within a familiar Windows environment. Although we target Raspberry Pi running Linux, the Python code we develop is cross-platform and runs without modification on both Linux and Windows.
Assuming you have a Python 3.7+ interpreter available in your $PATH or %PATH%, the steps to set up the TrafficCV Python environment on both Pi and Windows 10 are:
- Create and activate a virtual environment called cv (or trafficcv):
python -m venv cv
- Within the Python virtual environment, clone the TrafficCV repo from https://github.com/allisterb/TrafficCV.
- On Pi, first install the OpenCV native dependencies using apt-get. Then, on both Pi and Windows, install the TrafficCV Python dependencies by running
pip install -r requirements.txt. By default, Pi 4 uses piwheels.org as a repository for native ARM builds of libraries like OpenCV, so we should not have to compile anything from source.
- On both Pi and Windows, install the Python TensorFlow Lite runtime library using pip.
- If you want to use the Coral USB accelerator, install the Edge TPU runtime on Pi or Windows.
- For running GUI apps on the Pi from Windows, allow the Pi device to connect to our X server on port 6000, but only on a private subnet. This article describes the process of adding the Windows Firewall rule, if needed, and the VcXsrv options to use for allowing remote connections from the Pi.
- When all the dependencies have been installed, run
tcv --test in the TrafficCV project folder on Windows. This will test whether OpenCV is installed properly. On Pi, or on any Linux computer, you must pass the X server display that TrafficCV will connect to as the first parameter of the tcv script. For example,
./tcv MYCOMP:0.0 --test will run the test function using the X server on MYCOMP, while
./tcv $DISPLAY --test will run it on the local display. This parameter isn't used on a Windows machine, as OpenCV always uses the local desktop display.
If all is well, when you run the test function on Pi, you should see an X display created on your development computer streaming video from the ArduCam camera.
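The dependency check behind this test can be approximated with a few lines of Python. This is a hypothetical sketch, not TrafficCV's actual test code, which also opens the camera and display:

```python
# Minimal dependency check, similar in spirit to `tcv --test`.
# Hypothetical sketch; not the actual TrafficCV test code.
import importlib.util

def check_deps(names=("cv2", "numpy", "tflite_runtime")):
    """Return a dict mapping each dependency name to its availability."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    for name, ok in check_deps().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

If any dependency reports as missing, re-check the pip install steps above inside the activated virtual environment.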
Once our TrafficCV Python environment is set up, we can use the Remote SSH feature of VS Code to connect to our Pi and work on TrafficCV. We'll use the Windows machine to develop our app and deploy the code to the Pi for testing by running
git fetch && git pull on the Pi. One thing that we need to remember is to always activate our cv Python virtual environment on Pi or Windows before doing any development work, because all our package dependencies are isolated in that environment.
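Since forgetting the activation step is easy, a small guard at program start can warn us. This is a hypothetical helper, not part of TrafficCV itself:

```python
# Hypothetical helper: warn if we are not running inside a virtual environment.
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points at the venv directory, while
    # sys.base_prefix (Python 3.3+) still points at the base interpreter.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

if not in_virtualenv():
    print("Warning: no virtual environment active; did you forget to activate 'cv'?")
```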
TrafficCV is a cross-platform Python program that runs object detection models on live streams or videos of traffic to compute and extract traffic information, such as vehicle speed, vehicle class, and the number of vehicles passing through a Region of Interest (ROI). You specify the model to run using the --model parameter and the video source using the --video parameter, together with an optional --args parameter that specifies a comma-delimited set of model and detector arguments in key=value form.
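A command line such as ./tcv $DISPLAY --model mymodel --video traffic.mp4 --args "score_threshold=0.6" could be handled along these lines. This is a simplified sketch: the flag names match the article, but the parsing details of the real tcv script, and the example model and argument names, are assumptions:

```python
# Simplified sketch of TrafficCV-style argument parsing; the real tcv
# script may differ. The example model/video/argument values are hypothetical.
import argparse

def parse_model_args(args_str):
    """Split a comma-delimited key=value string into a dict."""
    if not args_str:
        return {}
    return dict(pair.split("=", 1) for pair in args_str.split(","))

parser = argparse.ArgumentParser(prog="tcv")
parser.add_argument("--model", required=True, help="object detection model to run")
parser.add_argument("--video", required=True, help="video file, stream URL, or camera")
parser.add_argument("--args", default="", help="comma-delimited key=value model/detector args")

ns = parser.parse_args(["--model", "mymodel", "--video", "traffic.mp4",
                        "--args", "score_threshold=0.6,roi=100"])
print(ns.model, ns.video, parse_model_args(ns.args))
```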
TrafficCV can run models on traffic videos from YouTube and other video hosting sites using VLC, which is installed by default on the Pi. The
vlc-stream scripts accept a file or URL as a parameter and then create a Multipart-JPEG (MPJPEG) stream on the current computer on port 18223. MPJPEG is a simple way to stream Motion-JPEG (M-JPEG)-encoded video over HTTP that can be processed by OpenCV. Any video source that VLC can decode can be transcoded and streamed to OpenCV, allowing TrafficCV to analyze videos in many different formats and locations.
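Under the hood, an M-JPEG stream is just a sequence of JPEG images, each delimited by the JPEG start-of-image (0xFFD8) and end-of-image (0xFFD9) markers; OpenCV's VideoCapture does this splitting for us when given the stream URL. A minimal sketch of the idea on a raw byte buffer, for illustration only (not TrafficCV code):

```python
# Illustrative sketch: split a buffer of concatenated JPEG frames, as found
# in an M-JPEG stream, on the JPEG start/end-of-image markers.
SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def split_mjpeg(buf):
    """Return the list of complete JPEG frames contained in buf."""
    frames = []
    start = buf.find(SOI)
    while start != -1:
        end = buf.find(EOI, start + len(SOI))
        if end == -1:
            break  # incomplete trailing frame; wait for more data
        frames.append(buf[start:end + len(EOI)])
        start = buf.find(SOI, end + len(EOI))
    return frames

# Two fake "frames" with filler payloads:
data = SOI + b"frame-one" + EOI + SOI + b"frame-two" + EOI
print(len(split_mjpeg(data)))  # → 2
```

In practice TrafficCV simply hands the stream URL on port 18223 to OpenCV, which performs this demultiplexing and JPEG decoding internally.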
Now that our development environment is set up, we can run neural network models on the various video sources. In the next article, we’ll have a look at the details of the TrafficCV implementation and the various object detection models to use for detecting vehicles and calculating their speed. Stay tuned!