u-net | Convolutional Networks for Biomedical Image Segmentation | Machine Learning library

by yihui-he | Python | Version: Current | License: MIT

kandi X-RAY | u-net Summary

u-net is a Python library typically used in Artificial Intelligence, Machine Learning, Deep Learning, and Keras applications. u-net has no reported bugs or vulnerabilities, has a permissive license, and has low support. However, no build file is available. You can download it from GitHub.

U-Net: Convolutional Networks for Biomedical Image Segmentation

Support

u-net has a low-activity ecosystem.
              It has 389 star(s) with 163 fork(s). There are 26 watchers for this library.
              It had no major release in the last 6 months.
There are 9 open issues and 9 closed issues. On average, issues are closed in 9 days. There is 1 open pull request and 0 closed requests.
              It has a neutral sentiment in the developer community.
              The latest version of u-net is current.

Quality

              u-net has 0 bugs and 0 code smells.

Security

              u-net has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              u-net code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              u-net is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

u-net releases are not available. You will need to build from source code and install.
u-net has no build file. You will need to create the build yourself to build the component from source.
              u-net saves you 212 person hours of effort in developing the same functionality from scratch.
              It has 520 lines of code, 33 functions and 4 files.
              It has low code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed u-net and discovered the following as its top functions. This is intended to give you an instant insight into the functionality u-net implements, and to help you decide if it suits your requirements.
• Perform training and prediction
• Build the U-Net convolutional network
• Load the test data
• Resize the image
• Build a residual block
• Return a function that applies a convolution
• Define a residual block function
• Create a convolutional block
• Generate the test submission
• Compute the run-length encoding
• Prepare an image
• Build a bottleneck block
• Create a shortcut connecting the input and the residual
• Return a function that creates a convolution layer
• Visualize the training data
• Load the training images and masks
• Run detection on an image
• Define a basic block function
• Resize an image

            u-net Key Features

            No Key Features are available at this moment for u-net.

            u-net Examples and Code Snippets

            No Code Snippets are available at this moment for u-net.

            Community Discussions

            QUESTION

            InvalidArgumentError: required broadcastable shapes at loc(unknown)
            Asked 2021-May-29 at 09:07

            Background

I am totally new to Python and to machine learning. I just tried to set up a U-Net from code I found on the internet and wanted to adapt it, bit by bit, to the case I'm working on. When trying to .fit the U-Net to the training data, I received the following error:

            ...

            ANSWER

            Answered 2021-May-29 at 08:40

Check whether the inputs of your ks.layers.concatenate layers have equal dimensions. For example, in ks.layers.concatenate([u7, c3]), the u7 and c3 tensors must have the same shape in every axis except the axis passed to ks.layers.concatenate. The default is axis=-1, the last dimension. To illustrate: if you call ks.layers.concatenate([u7, c3], axis=0), then all axes of u7 and c3 except the first must match exactly, e.g. u7.shape = [3, 4, 5] and c3.shape = [6, 4, 5].
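As a minimal illustration of that shape rule (not part of the original answer), consider two small Keras tensors:

import tensorflow as tf
from tensorflow.keras import layers

u7 = tf.zeros([3, 4, 5])
c3 = tf.zeros([6, 4, 5])

# axis=0: every axis except the first matches, so this concatenation works
out = layers.concatenate([u7, c3], axis=0)
print(out.shape)  # (9, 4, 5)

# The default axis=-1 would fail here, because axis 0 differs (3 vs. 6)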

            Source https://stackoverflow.com/questions/67557515

            QUESTION

            Training data dimensions for semantic segmentation using CNN
            Asked 2021-May-24 at 17:23

            I encountered many hardships when trying to fit a CNN (U-Net) to my tif training images in Python.

            I have the following structure to my data:

            • X
              • 0
                • [Images] (tif, 3-band, 128x128, values ∈ [0, 255])
            • X_val
              • 0
                • [Images] (tif, 3-band, 128x128, values ∈ [0, 255])
            • y
              • 0
                • [Images] (tif, 1-band, 128x128, values ∈ [0, 255])
            • y_val
              • 0
                • [Images] (tif, 1-band, 128x128, values ∈ [0, 255])

            Starting with this data, I defined ImageDataGenerators:

            ...

            ANSWER

            Answered 2021-May-24 at 17:23

I found the answer to this particular problem. Among other issues, class_mode has to be set to None for this kind of model. With that set, the ImageDataGenerator no longer emits the second array for X and y, so X and y are interpreted as the data and the mask (which is what we want) in the combined ImageDataGenerator. Otherwise, X_val_gen already produces the tuple shown in the screenshot, where the second entry is interpreted as the class; that would make sense in a classification problem with images spread across various folders, each labeled with a class ID.
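As a hedged sketch of that setup (the directory names X and y come from the question; the generator parameters are assumptions), the combined generator can be built like this:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

seed = 42  # identical seeds keep images and masks aligned
img_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "X", target_size=(128, 128), class_mode=None, seed=seed)
mask_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "y", target_size=(128, 128), color_mode="grayscale", class_mode=None, seed=seed)

# With class_mode=None each generator yields only image batches, so zipping
# them produces (data, mask) tuples suitable for model.fit
train_gen = zip(img_gen, mask_gen)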

            Source https://stackoverflow.com/questions/67644593

            QUESTION

            How to use class weights in Keras for image segmentation
            Asked 2021-May-05 at 15:49

            I am trying to segment medical images using a version of U-Net implemented with Keras. The inputs of my network are 3D images and the outputs are two one-hot-encoded 3D segmentation maps. I know that my dataset is very imbalanced (there is not so much to segment) and therefore I want to use class weights for my loss function (currently binary_crossentropy). With the class weights, I hope the model will give more attention to the small stuff it has to segment.

            If you know the imbalance of your database, you can pass the parameter class_weight to model.fit(). Does this also work with my use case?

            ...

            ANSWER

            Answered 2021-Feb-03 at 15:55

With the help of the above-mentioned GitHub issue, I managed to solve the problem for my particular use case. I want to share the solution with you anyway. An extra hurdle was the fact that I am using a custom generator for my data. A simplified version of this class is the following code:
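The asker's actual class is not reproduced here. A minimal sketch of the idea (all names, shapes, and weights below are assumptions) is a Sequence that yields per-voxel sample weights alongside each batch, since Keras's class_weight argument is not supported for 3+ dimensional targets:

import numpy as np
from tensorflow import keras

class WeightedVolumeGenerator(keras.utils.Sequence):
    def __init__(self, images, masks, class_weights, batch_size=2):
        self.images, self.masks = images, masks
        self.class_weights = class_weights  # e.g. {0: 0.1, 1: 0.9}
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.images) / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        x, y = self.images[sl], self.masks[sl]
        # Weight map with the same spatial shape as the binary target
        w = np.where(y > 0, self.class_weights[1], self.class_weights[0])
        return x, y, w  # fit() treats the third element as sample weights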

            Source https://stackoverflow.com/questions/65881582

            QUESTION

            Evaluate U-Net by layer
            Asked 2021-Apr-22 at 06:54

I come from a medical background and am a newbie in this machine learning field. I am trying to train my U-Net model using Keras and TensorFlow for image segmentation. However, my loss value is all NaN and the prediction is all black.

I would like to check the U-Net layer by layer, but I don't know how to feed the data or where to start. By checking each layer, I mean feeding my images to the first layer, inspecting its output, then moving on to the second layer, and so on until the last layer. I just want to see how the output is produced at each layer and to find where the NaN values first appear. I would really appreciate your help.

            These are my codes.

            ...

            ANSWER

            Answered 2021-Apr-20 at 05:24

To investigate your model layer by layer, please see this example of how to show the model's summary and also how to save the model:
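The linked example is not reproduced here. As a hedged sketch (the model path and input shape are assumptions), intermediate outputs can be inspected with a probe model to locate the first NaN:

import numpy as np
from tensorflow import keras

model = keras.models.load_model("unet.h5")  # hypothetical saved model
model.summary()

# Probe model returning every layer's output for one batch
probe = keras.Model(inputs=model.input,
                    outputs=[layer.output for layer in model.layers])
batch = np.random.rand(1, 128, 128, 3).astype("float32")  # assumed input shape
for layer, out in zip(model.layers, probe(batch)):
    print(layer.name, "has NaN:", bool(np.isnan(np.asarray(out)).any()))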

            Source https://stackoverflow.com/questions/67172102

            QUESTION

            Dice coefficent not increasing for U-net image segmentation
            Asked 2021-Apr-13 at 17:31
            Problem

I am using the Image segmentation guide by fchollet to perform semantic segmentation. I have attempted to modify the guide to suit my dataset by labelling the 8-bit image mask values as 1 and 2, as in the Oxford Pets dataset; these are shifted down to 0 and 1 in class Generator(keras.utils.Sequence). The input image is an RGB image.

            What I tried

I am not sure why, but my Dice coefficient isn't increasing at all. I have tried reducing the learning rate, changing the optimizer to SGD/RMSProp, normalizing the data, and taking the imbalanced labels into account, but the result is very strange: the accuracy/IoU of the model decreases as the number of epochs increases.

If it helps, I previously asked a question about the metrics I should be using for an imbalanced dataset here. The visualization of the predictions is okay, but the metric is not.

What can I do next to debug this problem? Is there anything wrong with my code? I would appreciate any advice.

            Here are the results

            ...

            ANSWER

            Answered 2021-Apr-13 at 17:31
            Edit (Solution)

            The model output was wrong. It was supposed to be a sigmoid activation function with 1 output channel. Changing output_layer = Conv2D(nclasses, 3, activation="softmax", padding="same")(output_layer) to output_layer = Conv2D(1, 1, activation="sigmoid", padding="same")(output_layer) solved my problem.
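A minimal runnable sketch of that fix (the surrounding layers are placeholders, not the asker's model):

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input((128, 128, 3))  # assumed input shape
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
# Before (wrong for binary segmentation):
#   outputs = layers.Conv2D(nclasses, 3, activation="softmax", padding="same")(x)
# After (the fix): one output channel with a sigmoid activation
outputs = layers.Conv2D(1, 1, activation="sigmoid", padding="same")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")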

Also, I decided to use the True Positive Rate (TPR), also commonly known as recall/sensitivity/probability of detection, as my main metric after reading this post.

            Source https://stackoverflow.com/questions/67018431

            QUESTION

            Full shape received: [256, 256, 3]
            Asked 2021-Mar-18 at 20:45

I am trying to train a model (U-Net) on RGB images with shape (256, 256, 3), but when I fit the model I get the following error:

            ...

            ANSWER

            Answered 2021-Mar-18 at 11:51

            The model expects the input to be a 4D Tensor but you are passing in a 3D Tensor.

            You would just need to reshape the input before passing it to the model:
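A minimal sketch of that reshape (shapes taken from the question): adding a batch dimension turns the 3D image into the 4D tensor the model expects.

import numpy as np

image = np.zeros((256, 256, 3), dtype="float32")  # a single RGB image
batch = np.expand_dims(image, axis=0)             # -> (1, 256, 256, 3)
# model.fit(batch, ...) now receives the expected 4D input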

            Source https://stackoverflow.com/questions/66682217

            QUESTION

            How to do interactive image binarization using trackbars?
            Asked 2021-Mar-05 at 11:43

I have code that gives me binary images using Otsu thresholding. I am making a dataset for a U-Net, and I want to try different algorithms (global as well as local) so that I can save the "best" image. Below is the code for my image binarization.

            ...

            ANSWER

            Answered 2021-Mar-05 at 11:43

The code of my solution got longer than expected, but it offers some fancy manipulation possibilities. First of all, let's see the actual window:

            There are sliders for

            • the morphological operation (dilate, erode, close, open),
            • the structuring element (rectangle, ellipse, cross), and
            • the kernel size (here: limited to the range 1 ... 21).

            The window name reflects the current settings for the first two sliders:

            When pressing s, the image is saved incorporating the current settings:
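The answer's full solution is not reproduced here. A much shorter hedged sketch of the same idea (file names are hypothetical) wires the three sliders to OpenCV morphological operations:

import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

ops = [cv2.MORPH_DILATE, cv2.MORPH_ERODE, cv2.MORPH_CLOSE, cv2.MORPH_OPEN]
shapes = [cv2.MORPH_RECT, cv2.MORPH_ELLIPSE, cv2.MORPH_CROSS]

cv2.namedWindow("result")
cv2.createTrackbar("op", "result", 0, len(ops) - 1, lambda v: None)
cv2.createTrackbar("shape", "result", 0, len(shapes) - 1, lambda v: None)
cv2.createTrackbar("ksize", "result", 1, 21, lambda v: None)

while True:
    op = ops[cv2.getTrackbarPos("op", "result")]
    shape = shapes[cv2.getTrackbarPos("shape", "result")]
    k = max(1, cv2.getTrackbarPos("ksize", "result"))
    result = cv2.morphologyEx(binary, op, cv2.getStructuringElement(shape, (k, k)))
    cv2.imshow("result", result)
    key = cv2.waitKey(50) & 0xFF
    if key == ord("s"):   # s saves with the current settings
        cv2.imwrite("result.png", result)
    elif key == 27:       # Esc quits
        break
cv2.destroyAllWindows()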

            Source https://stackoverflow.com/questions/66488070

            QUESTION

            Pytorch model runtime error when testing U-Net
            Asked 2021-Feb-14 at 06:19

I've defined a U-Net model using PyTorch, but it won't accept my input. I've checked the model layers and they seem to be applying the operations as I would expect them to, but I still get an error.

I've just switched to PyTorch after mostly using Keras, so I'm not really sure how to debug this issue. The error I get is:

            RuntimeError: Given groups=1, weight of size [32, 64, 3, 3], expected input[1, 128, 65, 65] to have 64 channels, but got 128 channels instead

            Here's the code I'm using:

            ...

            ANSWER

            Answered 2021-Feb-14 at 06:19

            Your problem is in the model layer definition.

You defined self.upconv2 = self.expand_block(64, 32, 3, 1), but what you do is concatenate two tensors, each with 64 channels, so in total you get 128.

            You should fix the channels of the up-sampling part of the U-Net to match the number of channels after the concatenation.

            Doing the mentioned fix will give you:
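The answer's corrected model is not shown here. As a minimal runnable sketch of the mismatch and the fix (not the asker's full model):

import torch
import torch.nn as nn

x = torch.randn(1, 64, 65, 65)      # upsampled decoder features
skip = torch.randn(1, 64, 65, 65)   # encoder skip features
cat = torch.cat([x, skip], dim=1)   # -> (1, 128, 65, 65), as in the error

# Wrong: expects 64 input channels but receives 128
# conv = nn.Conv2d(64, 32, kernel_size=3, padding=1)

# Right: input channels match the concatenated tensor
conv = nn.Conv2d(128, 32, kernel_size=3, padding=1)
print(conv(cat).shape)  # torch.Size([1, 32, 65, 65])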

            Source https://stackoverflow.com/questions/66144359

            QUESTION

            How to handle odd resolutions in Unet architecture PyTorch
            Asked 2021-Feb-03 at 20:14

I'm implementing a U-Net-based architecture in PyTorch. At train time, I have patches of size 256x256, which don't cause any problem. However, at test time, I have full-HD images (1920x1080). This is causing a problem with the skip connections.

            Downsampling 1920x1080 3 times gives 240x135. If I downsample one more time, the resolution becomes 120x68 which when upsampled gives 240x136. Now, I cannot concatenate these two feature maps. How can I solve this?

PS: I thought this was a fairly common problem, but I didn't find any solution, or even a mention of this problem, anywhere on the web. Am I missing something?

            ...

            ANSWER

            Answered 2021-Feb-03 at 15:35

It is a very common problem in segmentation networks, where skip connections are often involved in the decoding process. Networks usually (depending on the actual architecture) require input sizes whose side lengths are integer multiples of the largest stride (8, 16, 32, etc.).

            There are two main ways:

            1. Resize input to the nearest feasible size.
            2. Pad the input to the next larger feasible size.

I prefer (2), because (1) can cause small changes at the pixel level for all pixels, leading to unnecessary blurriness. Note that in both methods we usually need to recover the original shape afterward.

            My favorite code snippet for this task (symmetric padding for height/width):
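That snippet is not reproduced here. A hedged sketch of the same approach (the stride and shapes are assumptions) pads height and width symmetrically to the next feasible size, then crops the result back:

import torch
import torch.nn.functional as F

def pad_to(x, stride=32):
    h, w = x.shape[-2:]
    new_h = (h + stride - 1) // stride * stride
    new_w = (w + stride - 1) // stride * stride
    lh, lw = (new_h - h) // 2, (new_w - w) // 2
    pads = (lw, new_w - w - lw, lh, new_h - h - lh)  # left, right, top, bottom
    return F.pad(x, pads), pads

def unpad(x, pads):
    lw, rw, lh, rh = pads
    return x[..., lh:x.shape[-2] - rh, lw:x.shape[-1] - rw]

img = torch.randn(1, 3, 1080, 1920)
padded, pads = pad_to(img)      # -> (1, 3, 1088, 1920)
restored = unpad(padded, pads)  # -> back to (1, 3, 1080, 1920)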

            Source https://stackoverflow.com/questions/66028743

            QUESTION

            Using SSIM loss in TensorFlow returns NaN values
            Asked 2020-Dec-15 at 13:04

I'm training a network with MRI images and I wanted to use SSIM as the loss function. Until now I was using MSE, and everything was working fine. But when I tried to use SSIM (tf.image.ssim), I get a bunch of these warning messages:

            ...

            ANSWER

            Answered 2020-Nov-28 at 20:44

In my experience this warning is typically related to attempting to plot a point with a coordinate at infinity. Of course, you should really show us more code for us to help you effectively.
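For reference, a common way to define an SSIM-based loss in TensorFlow (an assumption on my part, not code from this thread) is to normalize the images and pass a matching max_val, since a mismatched data range is a frequent source of NaNs:

import tensorflow as tf

def ssim_loss(y_true, y_pred):
    # Assumes inputs are already scaled to [0, 1]; max_val must match that range
    y_true = tf.cast(y_true, tf.float32)
    y_pred = tf.cast(y_pred, tf.float32)
    return 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))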

            Source https://stackoverflow.com/questions/64980914

Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install u-net

            You can download it from GitHub.
            You can use u-net like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
CLONE
• HTTPS: https://github.com/yihui-he/u-net.git
• GitHub CLI: gh repo clone yihui-he/u-net
• SSH: git@github.com:yihui-he/u-net.git
