BraTs | Keras implementation for BraTs | Machine Learning library
kandi X-RAY | BraTs Summary
PyTorch & Keras implementation for BraTs (Brain Tumor Segmentation)
Top functions reviewed by kandi - BETA
- Train the model
- Generate dataset
- Unet network
- Loads a model
- Print a progress bar
- Adjusts image data
- Format time in human readable format
- Return data loader
- Loads a checkpoint from a file
- Get a logger
- Save the current optimizer state
- Converts a nii file to a PNG image
- Test the experiment
- Post processing
- Saves the result
- Create CRF image
- Dilate morphology
- Test resnet
- Compute the dice loss score
- Calculate the dice coefficient
BraTs Key Features
BraTs Examples and Code Snippets
Community Discussions
Trending Discussions on BraTs
QUESTION
I am quite new to the field of semantic segmentation and have recently tried to run the code provided on this paper: Transfer Learning for Brain Tumor Segmentation, which was made available on GitHub. It is a semantic segmentation task that uses the BraTS2020 dataset, comprising 4 modalities: T1, T1ce, T2, and FLAIR. The author utilised a transfer learning approach using ResNet34 weights.
Due to hardware constraints, I had to halve the batch size from 24 to 12. However, after training the model, I noticed a significant drop in performance, with the Dice scores (higher is better) of the 3 classes being only around 5-19-11 as opposed to the reported results of 78-87-82 in the paper. The training and validation accuracies, however, seem normal; the model just does not perform well on test data. I selected the model produced just before overfitting (when validation loss starts increasing while training loss is still decreasing), but it yielded equally bad results.
So far I have tried:
- Decreasing the learning rate from 1e-3 to 1e-4, which yielded similar results
- Increasing the number of batches fed to the model per training epoch to 200, to match the number of iterations run in the paper (100 batches per epoch at a batch size of 24), since I effectively halved the batch size
I noticed that image augmentations were applied to the training and validation datasets to increase the robustness of the model during training. Do these augmentations need to be performed on the test set in order to make predictions? There are no resizing transforms; the transforms present are Gaussian blur and noise, changes in brightness intensity, rotations, elastic deformation, and mirroring, all implemented using the example here.
I'd greatly appreciate help on these questions:
Doubling the number of batches per epoch effectively matches the number of iterations performed in the original paper, since the batch size is halved. Is this the correct approach?
Does the test set data need to be augmented similarly to the training data in order to perform predictions? (Note: no resizing transformations were performed)
ANSWER
Answered 2021-Mar-18 at 21:48
- Technically, for a smaller batch the number of iterations should be higher for convergence. So your approach is going to help, but it probably won't give the same performance boost as using the larger batch size.
- Usually, we don't use augmentation on test data. But if a transformation applied to the training and validation data is not also applied to the test data, the test performance will no doubt be poor. You can try test-time augmentation though, even though it's not very common for segmentation tasks: https://github.com/qubvel/ttach
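For reference, a minimal test-time augmentation sketch using the ttach library linked above (assuming a trained PyTorch segmentation model called model and a 4-D input tensor called image; both names are placeholders, not the asker's code):

import torch
import ttach as tta

# Wrap the model so that flips and 90-degree rotations are applied at inference
# time and the resulting predictions are averaged.
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode="mean")

with torch.no_grad():
    prediction = tta_model(image)  # image: (batch, channels, height, width)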
QUESTION
I have an MHA file and when I write
...ANSWER
Answered 2021-Feb-15 at 21:09 According to the documentation, load(image) "Loads the image and returns a ndarray with the image's pixel content as well as a header object."
Further down in medpy.io.load
it says that image_data is "The image data as numpy array with order x,y,z,c.".
Edit: Because I was kind of curious to see what is actually in this file, I put together a quick script (heavily based on the slider demo) to take a look. I'll leave it here just in case it may be useful to someone. (Click on the "Layer" slider to select the z-coordinate to be drawn.)
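For completeness, a minimal sketch of loading an MHA volume with medpy and displaying a single axial slice (the file name below is only a placeholder, and this is not the answerer's slider script):

import matplotlib.pyplot as plt
from medpy.io import load

image_data, image_header = load("subject_flair.mha")  # ndarray with order x, y, z(, c)
print(image_data.shape, image_data.dtype)

plt.imshow(image_data[:, :, image_data.shape[2] // 2], cmap="gray")  # middle z slice
plt.show()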
QUESTION
I am working on a biomedical image segmentation project. I am using a U-Net model for the job. The problem is, when I train the model, the validation loss doesn't seem to make sense.
I used dice_coef_loss as the loss function as well as the metric. The result of the training is the graph below. The graph clearly shows that the validation loss is not following my loss function, since the two curves are distinguishable, even though the training loss does follow the training dice_coef_loss values.
(The first image from the left is the training and validation loss; the third one is the training and validation dice_coef_loss as a metric.)
(Sorry, I am not yet eligible to embed an image, please check the link.)
Here is my model ...ANSWER
Answered 2020-Jun-29 at 10:49 As ZabirAlNazi suggested, it was a problem with the library used. Changing the imports from keras to tensorflow.keras solved the issue.
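For illustration, a common way to write the dice coefficient and its loss against tensorflow.keras (a generic sketch of the usual formulation, not necessarily the asker's exact code):

from tensorflow.keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    # Flatten both masks and compute (2 * intersection) / (sum of both masks), smoothed to avoid division by zero.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return 1.0 - dice_coef(y_true, y_pred)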
QUESTION
I am working on segmentation of the brain tumor dataset provided by the BraTS challenge, based on U-Net. After defining the model, an error occurred at the training stage.
The error is:
...ANSWER
Answered 2019-Sep-17 at 19:17 Older versions of Keras require you to use 'inputs' instead of 'input' as the name of the first input parameter for Model. Check if you can update your Keras version (current stable is 2.2.5) or try this line:
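As a general illustration (not the truncated line from the answer), the two keyword spellings accepted by the functional Model constructor look like this; the tiny model below is only a placeholder:

from keras.layers import Input, Dense
from keras.models import Model

input_layer = Input(shape=(16,))
output_layer = Dense(1, activation="sigmoid")(input_layer)

model = Model(inputs=input_layer, outputs=output_layer)   # Keras 2.x keyword names
# model = Model(input=input_layer, output=output_layer)   # keyword names used by older releases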
QUESTION
I'm a student in medical imaging. I have to construct a neural network for image segmentation. I have a data set of 285 subjects, each with 4 modalities (T1, T2, T1ce, FLAIR) + their respective segmentation ground truth. Everything is in 3D with a resolution of 240x240x155 voxels (this is the BraTS data set).
As we know, I cannot input the whole image on a GPU for memory reasons. I have to preprocess the images and decompose them into overlapping 3D patches (sub-volumes of 40x40x40), which I do with scikit-image's view_as_windows, and then serialize the windows into a TFRecords file. Since each patch overlaps by 10 voxels in each direction, this sums to 5,292 patches per volume. The problem is, with only 1 modality, I get a size of 800 GB per TFRecords file. Plus, I have to compute their respective segmentation weight maps and store them as patches too. The segmentation is also stored as patches in the same file.
And I eventually have to include all the other modalities, which would take nothing less than terabytes of storage. I also have to remember to sample an equivalent number of patches from the background and foreground (class balancing).
So, I guess I have to do all preprocessing steps on the fly, just before every training step (while hoping not to slow down training too much). I cannot use tf.data.Dataset.from_tensors() since I cannot load everything in RAM. I cannot use tf.data.Dataset.from_tfrecords() since preprocessing the whole thing beforehand takes a lot of storage and I would eventually run out.
The question is: what options are left for doing this cleanly, with the possibility of reloading the model after training for image inference?
Thank you very much and feel free to ask for any other details.
Pierre-Luc
...ANSWER
Answered 2019-Apr-14 at 01:14 Finally, I found a method to solve my problem.
I first determine a subject's crop without actually applying it: I only measure the slices I need to crop the volume down to the brain. I then serialize all the data set images into one TFRecords file, each training example being an image modality, the original image's shape, and the slices (saved as an Int64 feature).
I decode the TFRecords afterward. Each training sample is reshaped to the shape stored in its feature. I stack all the image modalities using the tf.stack() method. I crop the stack using the previously extracted slices (the crop then applies to all images in the stack). I finally get some random patches using the tf.random_crop() method, which allows me to randomly crop a 4-D array (height, width, depth, channel).
The only thing I still haven't figured out is data augmentation. Since all of this happens on Tensors, I cannot use plain Python and NumPy to rotate, shear, or flip a 4-D array. I would need to do it inside the tf.Session(), but I would rather avoid this and directly feed the training handle.
For the evaluation, I serialize only one test subject per TFRecords file. The test subject contains all the modalities too, but since there are no TensorFlow methods to extract patches in 4-D, the image is preprocessed into small patches using Scikit-Learn's extract_patches() method. I serialize these patches to the TFRecords file.
This way, the training TFRecords file is a lot smaller, and I can evaluate the test data using batch prediction.
Thanks for reading and feel free to comment!
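A rough TF 1.x sketch of the stack / crop / random-patch step described above (the zero tensors and slice bounds are placeholders standing in for the decoded TFRecord features, not the author's actual pipeline):

import tensorflow as tf

# Placeholder modality volumes standing in for the decoded TFRecord tensors.
t1 = tf.zeros([240, 240, 155])
t2 = tf.zeros([240, 240, 155])
t1ce = tf.zeros([240, 240, 155])
flair = tf.zeros([240, 240, 155])

stack = tf.stack([t1, t2, t1ce, flair], axis=-1)        # -> (240, 240, 155, 4)
cropped = stack[30:210, 30:210, 10:150, :]              # example crop to the brain region
patch = tf.random_crop(cropped, size=[40, 40, 40, 4])   # random (h, w, d, c) patch; tf.image.random_crop in TF 2.x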
QUESTION
I've seen variations of this question all over, but am still struggling to implement it correctly. I have brain MRI images with ground-truth segmentation masks with 4 classes (0 - background, 1 - tissue type 1, 2 - tissue type 2, 3 - inexplicably skipped, and 4 - tissue type 4... the BraTS dataset).
I have a basic U-Net architecture implemented, but am having trouble extending it to non-binary classification, particularly the loss function.
This is what I have implemented, but I'm obviously overlooking important details:
...ANSWER
Answered 2019-Jan-23 at 22:09 Something I overlooked, straight from the documentation for tf.nn.sparse_softmax_cross_entropy_with_logits:
labels: Tensor of shape [d_0, d_1, ..., d_{r-1}] (where r is rank of labels and result) and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes). Other values will raise an exception when this op is run on CPU, and return NaN for corresponding loss and gradient rows on GPU.
So changing the labels to shape [-1]
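A hedged sketch of that flattening for a 4-class segmentation head (TF 1.x style; the zero tensors are only stand-ins for real logits and labels):

import tensorflow as tf

num_classes = 4
logits = tf.zeros([2, 240, 240, num_classes])      # (batch, height, width, classes)
labels = tf.zeros([2, 240, 240], dtype=tf.int32)   # integer class ids in [0, num_classes)

flat_logits = tf.reshape(logits, [-1, num_classes])
flat_labels = tf.reshape(labels, [-1])             # labels must have rank = logits rank - 1
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=flat_labels, logits=flat_logits))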
QUESTION
I trained a TensorFlow model on a GPU cluster and saved the model using
...ANSWER
Answered 2018-Jun-23 at 02:10 Open up the checkpoint file with your favorite text editor and simply change the absolute paths found therein to just filenames.
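For context, the 'checkpoint' file is a small plain-text file whose entries look roughly like the following (the paths below are invented examples); replacing the absolute paths with bare filenames is the edit being suggested:

model_checkpoint_path: "/cluster/home/user/run1/model.ckpt-10000"
all_model_checkpoint_paths: "/cluster/home/user/run1/model.ckpt-10000"

becomes

model_checkpoint_path: "model.ckpt-10000"
all_model_checkpoint_paths: "model.ckpt-10000"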
QUESTION
I am trying to segment images from the BRATS challenge. I am using U-Net, combining these two repositories:
https://github.com/zsdonghao/u-net-brain-tumor
https://github.com/jakeret/tf_unet
When I try to output the prediction statistics, a shape mismatch error comes up:
InvalidArgumentError: Input to reshape is a tensor with 28800000 values, but the requested shape has 57600 [[Node: Reshape_2 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Cast_0_0, Reshape_2/shape)]]
I am using 240x240 image slices, with batch_verification_size = 500.
Then,
- this is shape test_x: (500, 240, 240, 1)
- this is shape test_y: (500, 240, 240, 1)
- this is shape test x: (500, 240, 240, 1)
- this is shape test y: (500, 240, 240, 1)
- this is shape batch x: (500, 240, 240, 1)
- this is shape batch y: (500, 240, 240, 1)
- this is shape prediction: (500, 240, 240, 1)
- this is cost : Tensor("add_88:0", shape=(), dtype=float32)
- this is cost : Tensor("Mean_2:0",shape=(), dtype=float32)
- this is shape prediction: (?, ?, ?, 1)
- this is shape batch x: (500, 240, 240, 1)
- this is shape batch y: (500, 240, 240, 1)
240 x 240 x 500 = 28,800,000; I don't know why it is requesting 57,600.
It looks like the error is emerging from the output_minibatch_stats function:
ANSWER
Answered 2018-May-27 at 18:23 You set your batch size to 1 in your TensorFlow pipeline during training but feed in a batch size of 500 with your testing data. That's why the network expects a tensor with only 57,600 values (240 x 240 x 1). You can either set your training batch size to 500 or your testing batch size to 1.
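If hard-coding the batch size is the underlying issue, one common alternative (a generic TF 1.x sketch, not code from either repository) is to leave the batch dimension open so the same graph accepts both training and verification batches:

import tensorflow as tf

# None in the first dimension lets the placeholder accept any batch size,
# e.g. 1 during training and 500 during verification.
x = tf.placeholder(tf.float32, shape=[None, 240, 240, 1])
y = tf.placeholder(tf.float32, shape=[None, 240, 240, 1])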
QUESTION
I'm just starting to use NiftyNet for medical image segmentation. To get going with the software, I was trying to run the demo that segments images from the Brats Challenge dataset (http://www.braintumorsegmentation.org/).
I have downloaded the BraTS data, used rename_crop_brats on it, and set my $PYTHONPATH. However, when I run the command:
python net_run.py train -c train_whole_tumor_sagittal.ini --app brats_segmentation.BRATSApp --name anisotropic_nets.wt_net.WTNet
I get the following error message:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Provided indices are out-of-bounds w.r.t. dense side with broadcasted shape
I'm not quite sure what I've messed up here; any advice is welcome.
...ANSWER
Answered 2017-Oct-19 at 13:47 This error means that you have more discrete labels in your training images than your network can output. Here, it seems like there are more than 2 labels, while this network is set up to do binary classification.
Could you check which file the 'histogram_ref_file' in the .ini file is pointing to? It should point to the one provided in the [niftynet]/demos/BRATS17 directory, which binarises the tumour mask. This file should have the following text:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install BraTs
You can use BraTs like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
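A typical setup along those lines might look like the following (this assumes the repository has been cloned locally and provides a requirements file or setup script; neither is confirmed here):

python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip setuptools wheel
pip install -r requirements.txt   # if the repository provides one
pip install .                     # or run the training scripts directly from the clone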