CheXNet-Keras | build CheXNet-like models | Machine Learning library
kandi X-RAY | CheXNet-Keras Summary
CheXNet is a deep learning algorithm that can detect and localize 14 kinds of diseases in chest X-ray images. As described in the paper, a 121-layer densely connected convolutional neural network (DenseNet-121) is trained on the ChestX-ray14 dataset, which contains 112,120 frontal-view X-ray images from 30,805 unique patients. The paper reports performance that exceeds that of practicing radiologists on pneumonia detection. If you are new to this project, Luke Oakden-Rayner's post is highly recommended.
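As a rough illustration of that architecture (not the repository's exact code), a DenseNet121-based multi-label classifier can be assembled in Keras along these lines; the input size, ImageNet-pretrained weights, and optimizer are assumptions:

```python
# Hedged sketch of a CheXNet-like model: DenseNet121 backbone with a
# 14-unit sigmoid head for multi-label chest X-ray classification.
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

def build_chexnet(input_shape=(224, 224, 3), num_classes=14):
    base = DenseNet121(include_top=False, weights="imagenet", input_shape=input_shape)
    x = GlobalAveragePooling2D()(base.output)
    # One sigmoid unit per finding: a single X-ray can show several diseases,
    # so the labels are treated as independent.
    outputs = Dense(num_classes, activation="sigmoid")(x)
    return Model(inputs=base.input, outputs=outputs)

model = build_chexnet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```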
Top functions
- Create a CAM overlay
- Load an image
- Return the output layer corresponding to the given name
- Calculate the ROC-AUC score
- Return the training data
- Prepare the next epoch
- Prepare the dataset
- Create a Keras model
- Calculate the weights for each class (a rough sketch follows this list)
- Get the sample counts for each class
- Return the label (y) data
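As a hedged sketch of what the class-weight helpers above might do (the repository's exact formula may differ), per-class weights can be derived from positive/negative sample counts so that rare findings are not drowned out during training:

```python
# Illustrative only: derive {0, 1} weights per class from how often each
# finding appears among `total` images.
def class_weights(total, positive_counts):
    """positive_counts: dict mapping class name -> number of positive samples."""
    weights = {}
    for name, pos in positive_counts.items():
        neg = total - pos
        # Up-weight the rarer outcome so the binary cross-entropy loss is balanced.
        weights[name] = {0: pos / total, 1: neg / total}
    return weights

# Example: 100 images, only 20 labeled "Pneumonia".
print(class_weights(100, {"Pneumonia": 20}))
# {'Pneumonia': {0: 0.2, 1: 0.8}}
```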
Community Discussions
QUESTION
I am trying to modify the network which is implemented here. This network takes chest X-ray images as input and classifies them into 14 categories (13 types of diseases and no finding). The network does not take the patient's age and gender as input, so I want to provide it with that information too. In short, the last 3 layers of the network look like the following:
...ANSWER
Answered 2019-May-24 at 09:36: You can have a multi-input model.
So instead of just using this:
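The answer's original snippet is truncated above. A minimal sketch of the multi-input idea it describes, with illustrative layer names and sizes (not the asker's actual code), could look like this:

```python
# Hedged sketch: concatenate image features from a DenseNet121 backbone with
# an extra numeric input carrying age and gender before the final classifier.
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import Input, Dense, GlobalAveragePooling2D, Concatenate
from tensorflow.keras.models import Model

image_in = Input(shape=(224, 224, 3), name="image")
meta_in = Input(shape=(2,), name="age_and_gender")  # e.g. [normalized age, gender flag]

base = DenseNet121(include_top=False, weights="imagenet", input_tensor=image_in)
features = GlobalAveragePooling2D()(base.output)

merged = Concatenate()([features, meta_in])
outputs = Dense(14, activation="sigmoid")(merged)

model = Model(inputs=[image_in, meta_in], outputs=outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
# Training then takes two inputs: model.fit([images, metadata], labels, ...)
```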
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install CheXNet-Keras
Create & source a new virtualenv. Python >= 3.6 is required.
Install dependencies by running pip3 install -r requirements.txt.
Copy sample_config.ini to config.ini; you may customize batch_size and other training parameters there. Make sure config.ini is configured before you run training or testing.
Run python train.py to train a new model. If you want to run the training on multiple GPUs, just prepend CUDA_VISIBLE_DEVICES=0,1,... to restrict the GPU devices. The nvidia-smi command is helpful if you don't know which devices are available.
Run python test.py to evaluate your model on the test set.
Run python cam.py to generate images with a class activation mapping (CAM) overlay and the ground-truth bounding boxes. The ground truth comes from the BBox_List_2017.csv file, so make sure that file is in the ./data folder. CAM images will be placed under the output folder. A rough sketch of how a CAM overlay can be computed follows this list.
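For orientation only, here is a hedged sketch of how a CAM overlay can be computed for a DenseNet121-style classifier; the convolutional layer name, preprocessing, and blending weights are assumptions and not the exact logic in cam.py:

```python
# Illustrative CAM computation: weight the last convolutional feature maps by
# the final dense layer's weights for the chosen class, then blend the
# resulting heatmap onto the input image.
import numpy as np
import cv2
from tensorflow.keras.models import Model

def compute_cam(model, image, class_index, conv_layer_name="relu"):
    """Return a heatmap overlay for `class_index` on `image` (H, W, 3, uint8)."""
    # Sub-model that outputs both the last conv feature maps and the predictions.
    conv_layer = model.get_layer(conv_layer_name)
    cam_model = Model(model.inputs, [conv_layer.output, model.output])

    x = np.expand_dims(image.astype("float32") / 255.0, axis=0)  # assumed preprocessing
    conv_maps, _ = cam_model.predict(x)
    conv_maps = conv_maps[0]                                     # (h, w, channels)

    # CAM: weighted sum of feature maps using the final Dense layer's kernel.
    class_weights = model.layers[-1].get_weights()[0][:, class_index]  # (channels,)
    cam = conv_maps @ class_weights                                    # (h, w)

    cam = np.maximum(cam, 0)
    cam = cam / (cam.max() + 1e-8)
    cam = cv2.resize(cam, (image.shape[1], image.shape[0]))

    heatmap = cv2.applyColorMap(np.uint8(255 * cam), cv2.COLORMAP_JET)
    return cv2.addWeighted(image.astype(np.uint8), 0.6, heatmap, 0.4, 0)
```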