AutoRCCar | OpenCV Python Neural Network Autonomous RC Car
kandi X-RAY | AutoRCCar Summary
test/
    rc_control_test.py: RC car control with keyboard
    stream_server_test.py: video streaming from Pi to computer
    ultrasonic_server_test.py: sensor data streaming from Pi to computer
model_train_test/
    data_test.npz: sample data
    train_predict_test.ipynb: a Jupyter notebook that walks through the neural network model in OpenCV 3
raspberryPi/
    stream_client.py: stream video frames in JPEG format to the host computer
    ultrasonic_client.py: send distance data measured by the sensor to the host computer
arduino/
    rc_keyboard_control.ino: control the RC car controller
computer/
    cascade_xml/: trained cascade classifiers
    chess_board/: images for calibration, captured by the Pi camera
    picam_calibration.py: Pi camera calibration
    collect_training_data.py: collect images in grayscale; data saved as *.npz
    model.py: neural network model
    model_training.py: model training and validation
    rc_driver_helper.py: helper classes/functions for rc_driver.py
    rc_driver.py: receive data from the Raspberry Pi and drive the RC car based on model predictions
    rc_driver_nn_only.py: simplified rc_driver.py without object detection
Traffic_signal/
    traffic signal sketch contributed by @geek111
Top functions reviewed by kandi - BETA
- Handle video detection
- Detect objects with a cascade classifier
- Calculate the distance to the target point
- Predict the model
- Stop stream
- Collect the images for training
- Write data to the stream
- Start the sensor stream
- Create a video stream
- Drive the video
- Load training data
- Drive forward
- Train the model
- Create neural network
- Evaluate the model's accuracy
- Saves the model to path
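Taken together, the model functions above (create, train, predict, evaluate, save) suggest the OpenCV 3 ml-module workflow that model.py is built around. A minimal sketch of that workflow, assuming an ANN_MLP classifier; the layer sizes and class count below are illustrative assumptions, not the repo's exact values:

```python
import cv2
import numpy as np

# Sketch of an OpenCV 3 ANN_MLP workflow. Layer sizes are assumptions:
# e.g. 38400 input pixels (320x120 grayscale) and 4 steering classes.
def create_neural_network(layer_sizes=(38400, 32, 4)):
    model = cv2.ml.ANN_MLP_create()
    model.setLayerSizes(np.int32(layer_sizes))
    model.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
    model.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
    return model

def train_model(model, images, labels):
    # images: one flattened float32 frame per row; labels: one-hot float32
    model.train(np.float32(images), cv2.ml.ROW_SAMPLE, np.float32(labels))

def predict(model, images):
    _, responses = model.predict(np.float32(images))
    return np.argmax(responses, axis=1)   # index of the winning class

def save_model(model, path):
    model.save(path)                      # serialized by OpenCV as XML/YAML
```

Evaluating accuracy then reduces to comparing predict() output against held-out labels.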
Community Discussions
Trending Discussions on AutoRCCar
QUESTION
I am trying to recreate this project. What I have is a server (my computer) and a client (my Raspberry Pi). What I am doing differently from the original project is that I am trying to use a simple webcam instead of a Raspberry Pi camera to stream images from my Pi to the server. I know that I must:
- Get opencv image frames from the camera.
- Convert a frame (which is a numpy array) to bytes.
- Transfer the bytes from the client to the server.
- Convert the bytes back into frames and view them.
Examples would be appreciated.
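For reference, a minimal client-side sketch of those four steps, assuming the server expects raw fixed-size grayscale frames (the server address and frame size below are hypothetical and must match the server):

```python
import socket

import cv2

SERVER_ADDR = ('192.168.1.100', 8000)  # hypothetical host computer address
WIDTH, HEIGHT = 320, 240               # agreed frame size, both ends

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(SERVER_ADDR)

cap = cv2.VideoCapture(0)              # the USB webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Grayscale + fixed resize so every frame is exactly
        # WIDTH * HEIGHT bytes on the wire.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (WIDTH, HEIGHT))
        sock.sendall(gray.tobytes())
finally:
    cap.release()
    sock.close()
```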
self_driver.py
...

ANSWER
Answered 2018-Aug-19 at 20:59

You can't just display every received buffer of 1-1024 bytes as an image; you have to concatenate them up and only display an image when your buffer is complete.
If you know, out of band, that your images are going to be a fixed number of bytes, you can do something like this:
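A sketch along those lines, assuming raw grayscale frames of an agreed-upon WIDTH x HEIGHT so each frame is a fixed number of bytes (the sizes below are illustrative):

```python
import socket

import cv2
import numpy as np

# Frame geometry agreed on out of band with the client: each raw
# grayscale frame is exactly WIDTH * HEIGHT bytes.
WIDTH, HEIGHT = 320, 240
FRAME_SIZE = WIDTH * HEIGHT

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('0.0.0.0', 8000))
server.listen(1)
conn, _ = server.accept()

buffer = b''
while True:
    chunk = conn.recv(4096)
    if not chunk:                      # client closed the connection
        break
    buffer += chunk
    # Display only once a complete frame has accumulated.
    while len(buffer) >= FRAME_SIZE:
        raw, buffer = buffer[:FRAME_SIZE], buffer[FRAME_SIZE:]
        frame = np.frombuffer(raw, np.uint8).reshape(HEIGHT, WIDTH)
        cv2.imshow('stream', frame)
        cv2.waitKey(1)

conn.close()
server.close()
cv2.destroyAllWindows()
```

If the frames are JPEG-compressed instead (as in the repo's stream_client.py), their size varies per frame, so you would need a length prefix or delimiter rather than a fixed FRAME_SIZE.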
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install AutoRCCar
- Install Miniconda (Python 3) on your computer.
- Create the auto-rccar environment with all necessary libraries for this project: conda env create -f environment.yml
- Activate the auto-rccar environment: source activate auto-rccar