brats | DeepMedic starter code in pytorch | Machine Learning library

by thuyen · Python · Version: Current · License: MIT

kandi X-RAY | brats Summary

brats is a Python library typically used in Artificial Intelligence and Machine Learning applications. brats has no bugs, it has no vulnerabilities, it has a permissive license, and it has low support. However, a build file is not available. You can download it from GitHub.

src/valid_list.txt needs to be changed to a proper validation set.

Support

              brats has a low active ecosystem.
              It has 26 star(s) with 9 fork(s). There are 2 watchers for this library.
It had no major release in the last 6 months.
There are 2 open issues and 0 closed issues. There is 1 open pull request and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of brats is current.

Quality

              brats has no bugs reported.

Security

              brats has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              brats is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              brats releases are not available. You will need to build from source code and install.
brats has no build file. You will need to create the build yourself to build the component from source.
              Installation instructions are not available. Examples and code snippets are available.

            Top functions reviewed by kandi - BETA

kandi has reviewed brats and discovered the following top functions. This is intended to give you an instant insight into the functionality brats implements, and to help you decide if it suits your requirements (an illustrative sketch of two of these functions appears after the list).
            • Train the model
            • Update the statistics
            • Evaluate the given model
            • Adjust the learning rate of the optimizer
            • Save checkpoint
            Get all kandi verified functions for this library.
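As an illustration only, here is what two of the reviewed functions commonly look like in PyTorch training scripts; the names follow the list above, but the bodies and signatures are assumptions, not the repository's actual code:

import torch

def adjust_learning_rate(optimizer, epoch, base_lr=1e-3, decay=0.1, step=30):
    # Step decay: scale the base rate by `decay` every `step` epochs.
    lr = base_lr * (decay ** (epoch // step))
    for group in optimizer.param_groups:
        group['lr'] = lr

def save_checkpoint(model, optimizer, epoch, path='checkpoint.pth'):
    # Persist everything needed to resume training later.
    torch.save({'epoch': epoch,
                'state_dict': model.state_dict(),
                'optimizer': optimizer.state_dict()}, path)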

            brats Key Features

            No Key Features are available at this moment for brats.

            brats Examples and Code Snippets

            No Code Snippets are available at this moment for brats.

            Community Discussions

            QUESTION

Transfer Learning Segmentation Model Performing Significantly Worse on Test Data
            Asked 2021-Mar-18 at 21:48

I am quite new to the field of semantic segmentation and have recently tried to run the code provided with this paper: Transfer Learning for Brain Tumor Segmentation, which was made available on GitHub. It is a semantic segmentation task that uses the BraTS2020 dataset, comprising 4 modalities: T1, T1ce, T2, and FLAIR. The author utilised a transfer-learning approach using ResNet34 weights.

Due to hardware constraints, I had to halve the batch size from 24 to 12. However, after training the model, I noticed a significant drop in performance: the Dice scores (higher is better) of the 3 classes were only around 5-19-11, as opposed to the reported results of 78-87-82 in the paper. The training and validation accuracies, however, seem normal; the model just does not perform well on test data. I also selected the model produced before overfitting (when validation loss starts increasing while training loss is still decreasing), but it yielded equally bad results.

            So far I have tried:

1. Decreasing the learning rate from 1e-3 to 1e-4, which yielded similar results
2. Increasing the number of batches fed to the model to 200 per training epoch, to match the number of iterations run in the paper (100 batches per epoch at a batch size of 24), since I effectively halved the batch size

I noticed that image augmentations were applied to the training and validation datasets to increase the robustness of model training. Do these augmentations need to be performed on the test set in order to make predictions? There are no resizing transforms; the transforms present are Gaussian blur and noise, changes in brightness intensity, rotations, elastic deformation, and mirroring, all implemented using the example here.

            I'd greatly appreciate help on these questions:

1. Doubling the number of batches per epoch effectively matches the number of iterations performed in the original paper, since the batch size is halved. Is this the correct approach?

            2. Does the test set data need to be augmented similarly to the training data in order to perform predictions? (Note: no resizing transformations were performed)

            ...

            ANSWER

            Answered 2021-Mar-18 at 21:48
1. Technically, for a smaller batch size the number of iterations should be higher for convergence. So your approach is going to help, but it probably won't give the same performance boost as doubling the batch size.

2. Usually, we don't use augmentation on test data. But if a transformation applied to the training and validation data is not applied to the test data, the test performance will be poor, no doubt. You can try test-time augmentation, though, even though it's not very common for segmentation tasks: https://github.com/qubvel/ttach
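For reference, a minimal sketch of test-time augmentation with the linked ttach library; `model` and `images` are assumptions (a trained segmentation network and a batch of input tensors):

import ttach as tta

# Wrap a trained segmentation model so predictions are averaged over the
# D4 group of flips and 90-degree rotations; `model` and `images` are assumed.
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(),
                                       merge_mode='mean')
masks = tta_model(images)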

            Source https://stackoverflow.com/questions/66697928

            QUESTION

            Image data in mha brain tumor file
            Asked 2021-Feb-15 at 21:54

            I have an MHA file and when I write

            ...

            ANSWER

            Answered 2021-Feb-15 at 21:09

            According to the documentation, load(image) "Loads the image and returns a ndarray with the image’s pixel content as well as a header object."

            Further down in medpy.io.load it says that image_data is "The image data as numpy array with order x,y,z,c.".
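As a minimal sketch of the call being discussed, following the quoted documentation (the file name is a placeholder):

from medpy.io import load

# `load` returns the pixel content as a NumPy ndarray plus a header object.
image_data, image_header = load('brain_tumor.mha')  # placeholder file name
print(image_data.shape)  # axis order is x, y, z (plus c for multi-channel data)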

            Edit: Because I was kind of curious to see what is actually in this file, I put together a quick script (heavily based on the slider demo) to take a look. I'll leave it here just in case it may be useful to someone. (Click on the "Layer" slider to select the z-coordinate to be drawn.)

            Source https://stackoverflow.com/questions/66214255

            QUESTION

            Keras model's validation loss doesn't match the loss function output
            Asked 2020-Jun-29 at 10:49

I am working on a biomedical image segmentation project, using the U-Net model for the job. The problem is, when I train the model, the validation loss doesn't seem plausible.

I used dice_coef_loss as the loss function as well as a metric. The result of the training is the graph below. The graph clearly shows that the validation loss does not follow my loss function, because the two curves are clearly distinguishable. The training loss, though, does follow the training dice_coef_loss values.

(The first image from the left is training and validation loss; the third one is training and validation dice_coef_loss as a metric)

            The history graph of training

(Sorry, I am not yet eligible to embed an image; please check the link.)

            Here is my model ...

            ANSWER

            Answered 2020-Jun-29 at 10:49

As ZabirAlNazi suggested, it was a problem with the library used. Changing the imports from keras to tensorflow.keras solved the issue.
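A sketch of the import change described; the specific classes imported are illustrative:

# Before (plain Keras, which caused the mismatch in this setup):
#   from keras.models import Model
#   from keras.layers import Input, Conv2D
# After (everything imported from the TensorFlow-bundled Keras):
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D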

            Source https://stackoverflow.com/questions/62426388

            QUESTION

            U-net: TypeError: __call__() missing 1 required positional argument: 'inputs'
            Asked 2020-Jan-04 at 13:00

I am working on segmentation of the brain tumor dataset provided by the BraTS challenge, based on U-Net. After defining the model, an error occurred at the training stage.

            The error is:

            ...

            ANSWER

            Answered 2019-Sep-17 at 19:17

Older versions of Keras require you to use 'inputs' instead of 'input' as the name of the first parameter of Model. Check if you can update your Keras version (the current stable release is 2.2.5), or try this line:
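The exact line suggested is not preserved in this excerpt; a sketch of the usual keyword fix, with placeholder layer variables, would be:

# Placeholder layer variables; this Keras version expects the `inputs` keyword.
model = Model(inputs=input_layer, outputs=output_layer)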

            Source https://stackoverflow.com/questions/57972486

            QUESTION

            TensorFlow tf.data.Dataset API for medical imaging
            Asked 2019-Apr-14 at 03:28

I'm a student in medical imaging. I have to construct a neural network for image segmentation. I have a data set of 285 subjects, each with 4 modalities (T1, T2, T1ce, FLAIR) plus their respective segmentation ground truth. Everything is in 3D with a resolution of 240x240x155 voxels (this is the BraTS data set).

As we know, I cannot input the whole image to a GPU, for memory reasons. I have to preprocess the images and decompose them into overlapping 3-D patches (sub-volumes of 40x40x40), which I do with scikit-image's view_as_windows, and then serialize the windows to a TFRecords file. Since the patches overlap by 10 voxels in each direction, this sums to 5,292 patches per volume. The problem is that, with only 1 modality, I get a size of 800 GB per TFRecords file. Plus, I have to compute the respective segmentation weight maps and store them as patches too. The segmentation is also stored as patches in the same file.

And I eventually have to include all the other modalities, which would take nothing less than terabytes of storage. I also have to remember that I must sample an equivalent number of patches from background and foreground (class balancing).

So, I guess I have to do all the preprocessing steps on the fly, just before every training step (while hoping not to slow down training too much). I cannot use tf.data.Dataset.from_tensors() since I cannot load everything in RAM. I cannot use tf.data.Dataset.from_tfrecords() since preprocessing the whole thing in advance takes a lot of storage that I will eventually run out of.

The question is: what options are left for doing this cleanly, with the possibility of reloading the model after training for image inference?

            Thank you very much and feel free to ask for any other details.

            Pierre-Luc

            ...

            ANSWER

            Answered 2019-Apr-14 at 01:14

            Finally, I found a method to solve my problem.

I first crop a subject's image without applying the actual crop: I only measure the slices needed to crop the volume down to just the brain. I then serialize the whole data set's images into one TFRecord file, each training example containing an image modality, the original image's shape, and the slices (saved as an Int64 feature).

I decode the TFRecords afterward. Each training sample is reshaped to the shape stored in its feature. I stack all the image modalities into one stack using the tf.stack() method. I crop the stack using the previously extracted slices (the crop then applies to all images in the stack). I finally get some random patches using the tf.random_crop() method, which allows me to randomly crop a 4-D array (height, width, depth, channel).
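A sketch of the steps just described, in TF 1.x style; the modality tensors and slice bounds are placeholders for values decoded from the TFRecord:

import tensorflow as tf

# t1, t1ce, t2, flair and the slice bounds x0..z1 are placeholders decoded
# from the TFRecord example.
volume = tf.stack([t1, t1ce, t2, flair], axis=-1)     # shape (H, W, D, 4)
volume = volume[x0:x1, y0:y1, z0:z1, :]               # apply the stored crop
patch = tf.random_crop(volume, size=[40, 40, 40, 4])  # random 4-D sub-volume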

The only thing I still haven't figured out is data augmentation. Since all of this happens on Tensors, I cannot use plain Python and NumPy to rotate, shear, or flip a 4-D array. I would need to do it inside the tf.Session(), but I would rather avoid this and directly input the training handle.

For evaluation, I serialize only one test subject per TFRecords file. The test subject contains all modalities too, but since there are no TensorFlow methods to extract patches in 4-D, the image is preprocessed into small patches using scikit-learn's extract_patches() method. I serialize these patches to the TFRecords.

This way, the training TFRecords file is a lot smaller, and I can evaluate the test data using batch prediction.

            Thanks for reading and feel free to comment !

            Source https://stackoverflow.com/questions/53662978

            QUESTION

            Multiclass U-Net segmentation in TensorFlow
            Asked 2019-Jan-23 at 22:09

I've seen variations of this question all over but am still struggling to implement it correctly. I have brain MRI images with ground-truth segmentation masks with 4 classes (0: background, 1: tissue type 1, 2: tissue type 2, 3: inexplicably skipped, and 4: tissue type 4... the BraTS dataset).

I have a basic U-Net architecture implemented, but am having trouble extending it to non-binary classification, particularly the loss function.

            This is what I have implemented, but I'm obviously overlooking important details:

            ...

            ANSWER

            Answered 2019-Jan-23 at 22:09

Something I overlooked, straight from the documentation of tf.nn.sparse_softmax_cross_entropy_with_logits:

            labels: Tensor of shape [d_0, d_1, ..., d_{r-1}] (where r is rank of labels and result) and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes). Other values will raise an exception when this op is run on CPU, and return NaN for corresponding loss and gradient rows on GPU.

So the fix was changing the labels to shape [-1].
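A sketch of that reshape, assuming logits of shape (batch, height, width, num_classes) and integer labels of shape (batch, height, width):

import tensorflow as tf

# Flatten so labels have one fewer dimension than logits, as the docs require.
flat_logits = tf.reshape(logits, [-1, num_classes])
flat_labels = tf.reshape(labels, [-1])  # int32/int64 indices in [0, num_classes)
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=flat_labels,
                                                   logits=flat_logits))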

            Source https://stackoverflow.com/questions/54041135

            QUESTION

            Restore TensorFlow model on different machine
            Asked 2018-Jun-23 at 02:51

I trained a TensorFlow model on a GPU cluster and saved the model using

            ...

            ANSWER

            Answered 2018-Jun-23 at 02:10

            Open up the checkpoint file with your favorite text editor and simply change the absolute paths found therein to just filenames.
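A made-up illustration of that edit in a typical TF1 checkpoint index file (the paths are hypothetical):

# before (absolute paths from the cluster)
model_checkpoint_path: "/cluster/home/user/run1/model.ckpt-50000"
all_model_checkpoint_paths: "/cluster/home/user/run1/model.ckpt-50000"
# after (bare filenames, resolved relative to the checkpoint directory)
model_checkpoint_path: "model.ckpt-50000"
all_model_checkpoint_paths: "model.ckpt-50000"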

            Source https://stackoverflow.com/questions/50751484

            QUESTION

            tensorflow, image segmentation convnet InvalidArgumentError: Input to reshape is a tensor with 28800000 values, but the requested shape has 57600
            Asked 2018-May-27 at 18:23

I am trying to segment images from the BraTS challenge. I am using U-Net, combining these two repositories:

            https://github.com/zsdonghao/u-net-brain-tumor

            https://github.com/jakeret/tf_unet

When I try to output the prediction statistics, a shape-mismatch error comes up:

            InvalidArgumentError: Input to reshape is a tensor with 28800000 values, but the requested shape has 57600 [[Node: Reshape_2 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Cast_0_0, Reshape_2/shape)]]

I am using image slices of 240x240, with batch_verification_size = 500

            Then,

            • this is shape test_x: (500, 240, 240, 1)
            • this is shape test_y: (500, 240, 240, 1)
            • this is shape test x: (500, 240, 240, 1)
            • this is shape test y: (500, 240, 240, 1)
            • this is shape batch x: (500, 240, 240, 1)
            • this is shape batch y: (500, 240, 240, 1)
            • this is shape prediction: (500, 240, 240, 1)
            • this is cost : Tensor("add_88:0", shape=(), dtype=float32)
• this is cost : Tensor("Mean_2:0", shape=(), dtype=float32)
            • this is shape prediction: (?, ?, ?, 1)
            • this is shape batch x: (500, 240, 240, 1)
            • this is shape batch y: (500, 240, 240, 1)

240 x 240 x 500 = 28,800,000. I don't know why it is requesting 57,600.

It looks like the error is emerging from the output_minibatch_stats function:

            ...

            ANSWER

            Answered 2018-May-27 at 18:23

You set your batch size to 1 in your TensorFlow pipeline during training but are feeding a batch size of 500 in your testing data. That's why the network requests a tensor with only 57,600 values (240 x 240 x 1). You can either set your training batch size to 500 or your testing batch size to 1.

            Source https://stackoverflow.com/questions/50552806

            QUESTION

            NiftyNet: indices are out-of-bounds error
            Asked 2017-Oct-19 at 13:47

            I'm just starting to use NiftyNet for medical image segmentation. To get going with the software, I was trying to run the demo that segments images from the Brats Challenge dataset (http://www.braintumorsegmentation.org/).

I have downloaded the BraTS data, used rename_crop_brats on it, and set my $PYTHONPATH. However, when I run the command:

            python net_run.py train -c train_whole_tumor_sagittal.ini --app brats_segmentation.BRATSApp --name anisotropic_nets.wt_net.WTNet

            I get the following error message:

            tensorflow.python.framework.errors_impl.InvalidArgumentError: Provided indices are out-of-bounds w.r.t. dense side with broadcasted shape

I'm not quite sure what I've messed up here; any advice is welcome.

            ...

            ANSWER

            Answered 2017-Oct-19 at 13:47

This error means that you have more discrete labels in your training images than your network can output. Here, it seems that there are more than 2 labels, while this network is set up to do binary classification.

            Could you check which file the 'histogram_ref_file' in the .ini file is pointing to? It should point to the one provided in the [niftynet]/demos/BRATS17 directory, which binarises the tumour mask. This file should have the following text:

            Source https://stackoverflow.com/questions/46831160

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install brats

            You can download it from GitHub.
You can use brats like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution with header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid making changes to the system.
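Since the repository ships no build file or install instructions, here is a generic from-source setup sketch following the general advice above (none of these commands are project-documented):

git clone https://github.com/thuyen/brats.git
cd brats
python -m venv .venv && source .venv/bin/activate   # optional but recommended
pip install --upgrade pip setuptools wheel
pip install torch                                   # the project is PyTorch starter code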

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/thuyen/brats.git

          • CLI

            gh repo clone thuyen/brats

• SSH

            git@github.com:thuyen/brats.git
