ultrasound-nerve-segmentation | Kaggle Ultrasound Nerve Segmentation competition | Machine Learning library
kandi X-RAY | ultrasound-nerve-segmentation Summary
Kaggle Ultrasound Nerve Segmentation competition [Keras]. #Install (Ubuntu {14,16}, GPU). Configure the backend in ~/.keras/keras.json (this is important: the project was developed on the Theano backend, and some issues are possible with TensorFlow). Place the train and test data into the '../train' and '../test' folders respectively. Results will be generated in the "res/" folder; res/unet.hdf5 is the best model. Generate predictions with the model in res/unet.hdf5. The motivation is explained in my internal presentation (slides:
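For the older Keras 1.x releases this project targets, the backend switch lives in ~/.keras/keras.json. A sketch of a Theano-backed configuration (field names varied slightly between Keras versions, so treat this as illustrative):

```json
{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
```

With "backend": "theano" set, tensors follow the channels-first ("th") ordering the project expects.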
Top functions reviewed by kandi - BETA
- Test flow
- Generate a random transformation
- Transform an image using elastic transformation
- Generate the next batch of data
- Generate the flow of the flow function
- Performs validation
- Calculate the length of a label
- Resize image
- Load test ids
- Run the test
- Load test data
- Create training images
- Create test images
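One of the functions listed above applies an elastic transformation to an image. A common implementation of that technique (following Simard et al., 2003; the repository's own version may differ in details such as interpolation order and parameter names):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_transform(image, alpha, sigma, random_state=None):
    """Apply a random elastic deformation to a 2D image.

    alpha scales the displacement magnitude; sigma smooths the random
    displacement field so that neighbouring pixels move together.
    """
    rng = np.random.RandomState(random_state)
    shape = image.shape
    # Random per-pixel displacements in [-1, 1], smoothed and scaled
    dx = gaussian_filter(rng.rand(*shape) * 2 - 1, sigma) * alpha
    dy = gaussian_filter(rng.rand(*shape) * 2 - 1, sigma) * alpha
    y, x = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    coords = np.reshape(y + dy, (-1, 1)), np.reshape(x + dx, (-1, 1))
    # Sample the input image at the displaced coordinates (bilinear)
    return map_coordinates(image, coords, order=1).reshape(shape)
```

The same displacement field should be applied to an image and its mask so the two stay aligned.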
ultrasound-nerve-segmentation Key Features
ultrasound-nerve-segmentation Examples and Code Snippets
Community Discussions
Trending Discussions on ultrasound-nerve-segmentation
QUESTION
I am trying to train a U-Net for image segmentation in Keras using the following custom loss function and metric:
ANSWER
Answered 2019-Aug-10 at 16:26
Thanks to the insight by @today, I realized both the images and the masks were being loaded as arrays with values ranging from 0 to 255. So I added a preprocessing function to normalize them, which solved my problem:
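A preprocessing function along the lines the answer describes (the names and the mask thresholding are illustrative, not the asker's actual code) might be:

```python
import numpy as np

def preprocess(images, masks):
    """Scale 8-bit images and masks from [0, 255] to [0, 1].

    Losses such as binary cross-entropy and the Dice coefficient
    expect inputs and binary masks in [0, 1].
    """
    images = images.astype(np.float32) / 255.0
    # Threshold so the masks are strictly binary after scaling
    masks = (masks.astype(np.float32) / 255.0 > 0.5).astype(np.float32)
    return images, masks
```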
QUESTION
I am trying to implement U-Net in Keras, but I got this error while training the model (calling model.fit()):
ValueError: Error when checking target: expected conv2d_302 to have shape (None, 1, 128, 640) but got array with shape (360, 1, 128, 128)
And the output of the model.summary() is :
...ANSWER
Answered 2018-Apr-26 at 19:13
It is almost certain that the author of the original code wanted to concatenate on the channels dimension, not on one of the image dimensions.
Tensors in convolutional networks can be in one of two formats:
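Those two formats are channels_first, (batch, channels, height, width), and channels_last, (batch, height, width, channels). A small helper (illustrative, not the asker's code) returns the axis a U-Net skip connection should concatenate on for each format:

```python
def channel_axis(data_format):
    """Return the concatenation axis for feature maps, given the
    Keras image data format string.

    channels_last  -> (batch, H, W, C): channels are the last axis
    channels_first -> (batch, C, H, W): channels are axis 1
    """
    if data_format == "channels_last":
        return -1
    if data_format == "channels_first":
        return 1
    raise ValueError("unknown data format: %r" % data_format)
```

You would then pass the result, e.g. from K.image_data_format(), to keras.layers.Concatenate(axis=...) so the model works under either format.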
QUESTION
I am running into a few problems while migrating an image segmentation code done with Keras+Tensorflow backend into Keras+CNTK backend. The code runs perfectly with a TF backend but crashes with CNTK.
The model was inspired from https://github.com/jocicmarko/ultrasound-nerve-segmentation/blob/master/train.py
Model inputs are defined as inputs = Input((img_width, img_height, num_channels)), where num_channels = 1.
The error comes from the line trying to fit the model:
model.fit(X_train, Y_train, epochs=trainingEpochs, verbose=2, shuffle=True, validation_data=(X_val, Y_val), callbacks=cb_list)
Where X_train, Y_train, X_val, Y_val are all of shape (num_slices, img_width, img_height, num_channels).
The error I keep getting is the following:
Traceback (most recent call last):
File "TrainNetwork_CNTK.py", line 188, in
history = model.fit(X_train, Y_train, epochs=trainingEpochs, verbose=2, shuffle=True, validation_data=(X_val, Y_val), callbacks=cb_list)
File "C:\Users...\site-packages\keras\engine\training.py", line 1430, in fit
initial_epoch=initial_epoch)
File "C:\Users...\site-packages\keras\engine\training.py", line 1079, in _fit_loop
outs = f(ins_batch)
File "C:\Users...\site-packages\keras\backend\cntk_backend.py", line 1664, in call
input_dict, self.trainer_output)
File "C:\Users...\site-packages\cntk\train\trainer.py", line 160, in train_minibatch
output_map, device)
File "C:\Users...\site-packages\cntk\cntk_py.py", line 2769, in train_minibatch
return _cntk_py.Trainer_train_minibatch(self, *args)
RuntimeError: Node 'UserDefinedFunction2738' (UserDefinedV2Function operation): TensorSliceWithMBLayoutFor: FrameRange's dynamic axis is inconsistent with data:
There seems to be very little activity on CNTK issues here on SO, so anything that could shed some light on this issue would be very helpful!
...ANSWER
Answered 2017-Sep-09 at 04:51
The reason is the loss function:
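The loss in the linked train.py is a Dice-coefficient loss. As a reference for the formula, here is a NumPy sketch (the repository's version is written with Keras backend ops such as K.flatten and K.sum so it stays differentiable; treat the names and the smoothing constant as illustrative):

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    """Dice coefficient: 2*|A ∩ B| / (|A| + |B|), smoothed to avoid 0/0."""
    y_true_f = y_true.ravel()
    y_pred_f = y_pred.ravel()
    intersection = np.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    # Negated so that maximizing overlap minimizes the loss
    return -dice_coef(y_true, y_pred)
```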
QUESTION
In Keras, what are the layers (functions) corresponding to tf.nn.conv2d_transpose in TensorFlow? I once saw the comment that we can just use combinations of UpSampling2D and Convolution2D as appropriate. Is that right?
In the following two examples, they all use this kind of combination.
1) In Building Autoencoders in Keras, the author builds the decoder as follows.
2) In a U-Net implementation, the author builds the deconvolution as follows.
...ANSWER
Answered 2017-Feb-09 at 20:32
The corresponding layers in Keras are Deconvolution2D layers.
It's worth mentioning that you should be really careful with them, because they can sometimes behave in unexpected ways. I strongly advise you to read this Stack Overflow question (and its answer) before you start using this layer.
UPDATE:
- Deconvolution is a layer that was added relatively recently, and maybe this is the reason why people advise you to use Convolution2D * UpSampling2D.
- Because it's relatively new, it may not work correctly in some cases. It also needs some experience to use properly.
- In fact, from a mathematical point of view, every deconvolution can be presented as a composition of Convolution2D and UpSampling2D, so maybe this is the reason why it was mentioned in the texts you provided.
UPDATE 2:
OK, I think I found an easy explanation of why Deconvolution2D can be presented as a composition of Convolution2D and UpSampling2D. We use the definition that Deconvolution2D is the gradient of some convolution layer. Let's consider the three most common cases:
- The easiest one is a Convolution2D without any pooling. In this case, as it's a linear operation, its gradient is the function itself, so Convolution2D.
- The trickier one is the gradient of Convolution2D with AveragePooling. So: (AveragePooling2D * Convolution2D)' = AveragePooling2D' * Convolution2D'. But the gradient of AveragePooling2D = UpSampling2D * constant, so the proposition is also true in this case.
- The trickiest one is the one with MaxPooling2D. In this case we still have (MaxPooling2D * Convolution2D)' = MaxPooling2D' * Convolution2D', but MaxPooling2D' != UpSampling2D. However, in this case one can easily find a Convolution2D which makes MaxPooling2D' = Convolution2D * UpSampling2D (intuitively, the gradient of MaxPooling2D is a zero matrix with only a single 1 on its diagonal; since Convolution2D can express a matrix operation, it can also represent the injection from an identity matrix to a MaxPooling2D gradient). So: (MaxPooling2D * Convolution2D)' = UpSampling2D * Convolution2D * Convolution2D = UpSampling2D * Convolution2D'.
The final remark is that all parts of the proof have shown that Deconvolution2D is a composition of UpSampling2D and Convolution2D rather than the opposite. One can easily prove that every function of the form of a composition of Convolution2D and UpSampling2D can also be presented in the form of a composition of UpSampling2D and Convolution2D. So basically, the proof is done :)
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install ultrasound-nerve-segmentation
You can use ultrasound-nerve-segmentation like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.