Nowadays, everyone receives packages regularly, and if you don’t spot a delivery right away, a “porch pirate” may steal it. According to C+R Research, 43% of Americans had at least one package stolen in 2020.
To solve this problem, I built a Raspberry Pi-powered system that uses a camera and machine learning to determine whether a package has been stolen from your porch. When it detects a theft in progress, it can sound an alarm, turn on a sprinkler to soak the criminal, and even shoot flour at the thief. My project was covered in a previous news article, but today I want to show you how to build it yourself.
If you have never worked with machine learning before, this project is a gentle enough introduction to open your eyes. We will use a type of computer vision called image classification to determine whether there is a package at your front door. To train it, we will use a tool called Google Cloud AutoML, which removes much of the complexity of training machine learning models.
What You Need for This Project
First, we need to set up something to collect data for our machine learning model.
1. Set up your Raspberry Pi. If you don’t know how to do this, check our guides on how to set up a Raspberry Pi for the first time or how to set up a headless Raspberry Pi (no monitor or keyboard).
2. Boot your Pi, install the base dependencies, and then clone the repository to your Raspberry Pi.
cd ~/
sudo apt-get update && sudo apt-get -y install git python3-pip
python3 -m pip install virtualenv
git clone https://github.com/rydercalmdown/package_theft_preventor.git
3. Enter the training directory and set up a virtual environment.
cd package_theft_preventor/training
python3 -m virtualenv -p python3 env
4. Activate your virtual environment and install the Python requirements.
source env/bin/activate
pip install -r requirements.txt
5. Set up your RTSP camera and point it at your front door. If you are using a Wyze Cam V2, flash the custom RTSP firmware (instructions here).
6. Get the RTSP URL from the camera settings and set it as the stream URL in training/Makefile.
nano Makefile
# update RTSP_URL with your stream URL
RTSP_URL=rtsp://username:password@camera_host/live
7. Run the code to test image collection. You should start to see images appear in the data directory.
8. Use the code to collect images of your front door at different times of day. The code takes a picture every 10 seconds, which should account for various lighting and weather conditions. You need about 1,000 no-package photos to get started.
# take photos of your door without packages
make no-package-images
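Under the hood, the capture step just grabs a frame from the RTSP stream on a timer and writes it to the data directory. Below is a minimal sketch of that loop, assuming OpenCV is available; the function and file names (`capture_loop`, `next_filename`) and the URL are placeholders, not the repository's actual code.

```python
import os
import time


def next_filename(label, timestamp, directory="data"):
    """Build a unique image path like data/no_package_1614000000.jpg."""
    return os.path.join(directory, f"{label}_{int(timestamp)}.jpg")


def capture_loop(stream_url, label="no_package", interval=10):
    """Grab a frame from the RTSP stream every `interval` seconds."""
    import cv2  # imported here so the helper above works without OpenCV

    capture = cv2.VideoCapture(stream_url)
    while True:
        ok, frame = capture.read()
        if ok:
            cv2.imwrite(next_filename(label, time.time()), frame)
        time.sleep(interval)


if __name__ == "__main__":
    capture_loop("rtsp://username:password@camera_host/live")
```

A real implementation would also handle dropped streams by reopening the capture, but this shows the basic idea: one timestamped, labeled file per interval.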
9. Once you have collected enough images without a package, start capturing images with various packages. Use boxes and envelopes of different sizes in different positions and orientations around the door. When you have a large number of photos of different packages, and a similar number of the package-free door at different times of day, you can start training.
10. Browse the training/data directory and delete any photos that are ambiguous or otherwise unsuitable for training.
11. Create a Google Cloud Storage bucket and store your images in it. You will need a Google Cloud account and the gcloud command-line tool installed on your local machine. If this is your first time using Google Cloud, you will also need to create a Google Cloud project.
# gsutil is installed with gcloud
gsutil mb gs://your_bucket_name_here -p your-project-name-here -l us-central1
12. Set your bucket name as GCS_BASE in training/Makefile.
nano Makefile
# edit GCS_BASE=gs://your_bucket_name_here
13. Run the make generate-csv command to generate the CSV file required for training.
14. Upload the generated CSV to your new bucket.
gsutil cp training_data.csv gs://your_bucket_name_here
15. Upload your images to your new bucket.
gsutil cp -r data gs://your_bucket_name_here/data
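AutoML Vision expects the training CSV to list one Cloud Storage image path and its label per row. The sketch below shows roughly what a generate-csv step produces; the helper names (`build_rows`, `write_csv`) and the assumption that each filename starts with its label are mine, not necessarily how the repository's Makefile target works.

```python
import csv


def build_rows(filenames, bucket="gs://your_bucket_name_here"):
    """Map each local image to a (gcs_path, label) CSV row.

    Assumes filenames encode their label, e.g. 'package_1614000000.jpg'
    or 'no_package_1614000000.jpg'.
    """
    rows = []
    for name in filenames:
        label = "no_package" if name.startswith("no_package") else "package"
        rows.append((f"{bucket}/data/{name}", label))
    return rows


def write_csv(rows, path="training_data.csv"):
    """Write the rows in the two-column format AutoML Vision imports."""
    with open(path, "w", newline="") as handle:
        csv.writer(handle).writerows(rows)


rows = build_rows(["no_package_1.jpg", "package_2.jpg"])
write_csv(rows)
```

Each row ends up looking like `gs://your_bucket_name_here/data/package_2.jpg,package`, which is why the images must be uploaded under the same `data/` prefix as in the previous step.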
16. Navigate to the Google Cloud AutoML Vision dashboard inside the Google Cloud Console and click to get started with AutoML Vision.
17. Click “New Dataset” and choose a name for the dataset. Select “Single-Label Classification” as the model objective, then click “Create Dataset”.
18. Select “Select a CSV file on Cloud Storage”, provide the Google Cloud Storage path to the CSV file you uploaded in the box below, and click Continue.
19. Google Cloud will return you to the import screen. After about 10 minutes, you will automatically be taken to the “Images” section of the dataset. Verify that all your images have been uploaded and correctly labeled as package or no_package.
20. In the “Train” tab, click “Train New Model” and choose a name. Select “Edge” so the model can be downloaded from Google Cloud after training completes, then click Continue.
21. Select “Faster predictions” for model optimization, since we will be running on a Raspberry Pi with limited computing power, then click Continue.
22. Accept the default suggestion for the node-hour budget, but note that you pay for the time these machines spend training your model. At the time of writing, the cost per node hour is approximately $3.15, so training this model should cost slightly more than $12.
23. Click the Start Training button. You will receive an email when training is complete.
24. After training completes, navigate to the “Test & Use” tab and export the model as a TF Lite file. Select the bucket where you stored the training data as the destination, then use the following command to download it. It will download the dict.txt, model.tflite, and tflite_metadata.json files. You now have a trained machine learning model that can recognize whether there is a package at your door.
gsutil cp -r gs://your_bucket_name_here/model-export/ ./
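If you want to sanity-check the exported model before wiring up the full system, you can run a single image through it. This is a minimal sketch, assuming the tflite_runtime and Pillow packages are installed; the `classify` and `top_label` helpers are illustrative names, not part of the repository.

```python
def top_label(scores, labels):
    """Return the label with the highest score and its confidence."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best], scores[best]


def classify(image_path, model_path="model.tflite", dict_path="dict.txt"):
    """Run one image through the exported Edge model (sketch)."""
    import numpy as np
    from PIL import Image
    from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

    with open(dict_path) as handle:
        labels = [line.strip() for line in handle]

    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Resize the image to the model's expected input shape, e.g. (1, 224, 224, 3)
    _, height, width, _ = inp["shape"]
    image = Image.open(image_path).convert("RGB").resize((width, height))
    tensor = np.expand_dims(np.array(image, dtype=inp["dtype"]), axis=0)

    interpreter.set_tensor(inp["index"], tensor)
    interpreter.invoke()
    return top_label(interpreter.get_tensor(out["index"])[0].tolist(), labels)
```

Calling `classify("data/package_1.jpg")` should return something like `("package", ...)` with a confidence score, which is exactly what the alarm system polls for.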
Set up the Raspberry Pi package alarm system
1. Navigate to the root directory of the repository and run the install command to install all of the low-level and Python-based requirements for the project.
cd ~/package_theft_preventor
make install
2. Copy the model files you downloaded on your computer to your Raspberry Pi, replacing the existing files in the src/models directory.
# From your desktop machine (adjust the hostname to match your Pi)
scp training/model-export/dict.txt pi@raspberrypi.local:/home/pi/package_theft_preventor/src/models/dict.txt
scp training/model-export/tflite_metadata.json pi@raspberrypi.local:/home/pi/package_theft_preventor/src/models/tflite_metadata.json
scp training/model-export/model.tflite pi@raspberrypi.local:/home/pi/package_theft_preventor/src/models/model.tflite
3. Set STREAM_URL in the Makefile to the RTSP stream URL of the camera at your door. This will be the same URL you used to train the model.
nano Makefile
# Edit STREAM_URL=rtsp://username:password@camera_host/endpoint
4. Connect the VCC and ground pins of the relay board to the Raspberry Pi, using board pins 4 (VCC) and 6 (ground) respectively.
5. Connect the data pins on the relay board to the following Raspberry Pi BCM pins. You can change the order, just keep track of which channel each pin is connected to for later.
Relay Pin 1 = Raspberry Pi BCM Pin 27 (Sprinkler Pin)
Relay Pin 2 = Raspberry Pi BCM Pin 17 (Siren Pin)
Relay Pin 3 = Raspberry Pi BCM Pin 22 (Air Solenoid Pin)
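In software, this wiring reduces to a channel-to-pin mapping and a function that energizes one relay briefly. A minimal sketch, assuming the RPi.GPIO library on the Pi (the `fire` helper and the active-high assumption are mine, not the repository's code):

```python
import time

# BCM pin assignments from the wiring steps above
RELAY_PINS = {
    "sprinkler": 27,
    "siren": 17,
    "air_solenoid": 22,
}


def fire(channel, seconds=3):
    """Energize one relay channel for a few seconds (runs on the Pi)."""
    import RPi.GPIO as GPIO  # imported here; only available on a Raspberry Pi

    pin = RELAY_PINS[channel]
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.HIGH)  # many relay boards are active-low; invert if needed
    time.sleep(seconds)
    GPIO.output(pin, GPIO.LOW)
    GPIO.cleanup(pin)
```

If you rewire a channel, you only need to update the dictionary, which is why keeping track of which pin goes to which channel matters.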
Note: In my project, I used a combination of a 12V siren, a 12V sprinkler controller, and a 12V air solenoid valve to trigger the various alarms. For the purposes of this tutorial, we will connect only the siren; I don’t recommend connecting anything more than that for real-life use. If you want to connect the other modules, follow the same three steps below.
6. Connect the positive terminal of the 12V power supply to the common port of relay channel 2.
7. Connect the 12V siren to the normally open port of relay channel 2.
8. Connect the other end of the siren directly to the 12V power supply’s ground.
9. If you want to connect a sprinkler, wire the 12V sprinkler controller to relay channel 1 using the same steps as above: connect the common terminal of relay channel 1 to the positive terminal of the power supply, one end of the sprinkler controller to the normally open port, and the other end of the sprinkler controller to ground. The code controls the sprinkler and siren independently.
10. If you connected a sprinkler, connect the supply end of the sprinkler controller to a pressurized water source and turn on the tap. Connect the other end to the hose leading to the sprinkler.
11. Plug in the 12V power supply.
12. (Optional) Add a photo of your face to the src/faces directory as a .jpg file. This allows the system to periodically check whether you are in the area and disarm itself accordingly.
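The disarm check can be done with the popular face_recognition library: encode the known faces once, then compare each camera frame against them. A minimal sketch, assuming that library is installed on the Pi; the helper names (`is_owner`, `check_for_owner`) are illustrative, not the repository's actual functions.

```python
def is_owner(match_results):
    """True if any known face matched in the frame."""
    return any(match_results)


def check_for_owner(frame_path, faces_dir="src/faces"):
    """Compare faces in a camera frame against the photos in src/faces (sketch)."""
    import glob
    import face_recognition  # only needed on the Pi

    known = []
    for path in glob.glob(f"{faces_dir}/*.jpg"):
        image = face_recognition.load_image_file(path)
        known.extend(face_recognition.face_encodings(image))

    frame = face_recognition.load_image_file(frame_path)
    for encoding in face_recognition.face_encodings(frame):
        if is_owner(face_recognition.compare_faces(known, encoding)):
            return True
    return False
```

When `check_for_owner` returns True, the system would skip arming or stand down rather than treating you as a thief collecting your own package.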
13. Start the system and test it. You will see various log statements indicating the current state of the system:
make run
# Starting stream - The system is connecting to the camera
# System watching - The system is classifying images of your porch as package/no_package right now
# System Armed - A package has been definitively detected; if it is removed, the alarm will go off
# Activating Alarm - The package has been removed, activating the alarm
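The states in those log lines form a small state machine: watch until a package is seen consistently, arm, and fire the alarm if the package disappears. Here is a minimal sketch of that logic, assuming a threshold of consecutive detections before arming; the class name and threshold are my own illustration, not the repository's implementation.

```python
class PackageMonitor:
    """Sketch of the watching/armed/alarm state logic described above."""

    def __init__(self, frames_to_arm=5):
        self.frames_to_arm = frames_to_arm  # consecutive detections before arming
        self.package_frames = 0
        self.armed = False

    def update(self, label):
        """Feed one classifier label per frame; returns the resulting state."""
        if label == "package":
            self.package_frames += 1
            if self.package_frames >= self.frames_to_arm:
                self.armed = True
            return "armed" if self.armed else "watching"
        # No package in this frame
        if self.armed:
            # An armed package has vanished: someone took it
            self.armed = False
            self.package_frames = 0
            return "alarm"
        self.package_frames = 0
        return "watching"
```

Requiring several consecutive detections before arming keeps a single misclassified frame (a shadow, a passing cat) from arming or firing the system.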
14. Invite your friends over to steal your package and test your new burglar alarm. I had a lot of fun with this part.
Once you are satisfied, your Raspberry Pi package theft detection system should be working. However, proceed with caution: if it doesn’t work perfectly, you might end up setting off the alarm (or the sprinkler) on an innocent delivery driver.