DCGAN | DCGAN implementation for a custom dataset | Machine Learning library
kandi X-RAY | DCGAN Summary
DCGAN implementation for a custom dataset. Inspired by the Udacity Deep Learning Nanodegree programme.
Trending Discussions on DCGAN
QUESTION
I'm trying to make a DCGAN but I keep getting this error when initializing the Convolutional2D layer for my discriminator. It worked fine when I tried it a few days ago but now it's broken.
Here's the build up to the specific layer that is causing problems
...ANSWER
Answered 2021-Jun-10 at 19:24
Did you try changing the version? If it's still broken, please share your logs and full code.
QUESTION
The TensorFlow DCGAN tutorial code for the generator and discriminator models is intended for 28x28 pixel black-and-white images (MNIST dataset).
I would like to adapt that model code to work with my own dataset of 280x280 RGB images (280, 280, 3), but it's not clear how to do that.
...ANSWER
Answered 2021-Mar-29 at 06:42
You can use the code in the tutorial fine; you just need to adapt the generator a bit. Let me break it down for you. Here is the generator code from the tutorial:
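The core of that adaptation is shape arithmetic. As a plain-Python sketch (a hypothetical layer plan for illustration, not code from the tutorial): with Keras's padding="same", a stride-2 Conv2DTranspose doubles the spatial size, so starting the generator from a 35x35 seed feature map and stacking three stride-2 upsampling layers reaches the 280x280 target, with 3 filters in the last layer for RGB.

```python
def same_pad_transpose_out(size, stride):
    # Keras Conv2DTranspose with padding="same": output size = input size * stride
    return size * stride

# Hypothetical plan for 280x280x3 output: start from a 35x35 seed feature map
size = 35
for stride in (2, 2, 2):                 # three stride-2 upsampling layers
    size = same_pad_transpose_out(size, stride)   # 35 -> 70 -> 140 -> 280

print(size)  # 280 -- matches the target; use 3 filters in the final layer for RGB
```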
QUESTION
So, I'm training a DCGAN model in PyTorch on the CelebA dataset (people). Here is the architecture of the generator:
...ANSWER
Answered 2021-Mar-25 at 01:56
You just can't do that. As you said, your network expects a 100-dimensional input, which is normally sampled from a standard normal distribution:
So the generator's job is to take this random vector and generate a 3x64x64 image that is indistinguishable from real images. I don't see any way to feed your image into the current network without modifying the architecture and retraining a new model. If you want to try a new model, you can change the input to occluded images, apply some conv/linear layers to reduce the dimensions to 100, and then keep the rest of the network the same. This way the network will try to learn to generate images not from a latent vector but from a feature vector extracted from the occluded images. It may or may not work.
EDIT: I've decided to give it a go and see if the network can learn with this type of conditioned input vector instead of latent vectors. I've used the tutorial example you've linked and added a couple of changes. First, a new network for receiving the input and reducing it to 100 dimensions:
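As a rough numpy sketch of that idea (hypothetical shapes and weights for illustration; the actual answer used learned PyTorch layers), an occluded 3x64x64 image can be flattened and projected down to a 100-dimensional vector that takes the place of the sampled latent z:

```python
import numpy as np

rng = np.random.default_rng(0)

occluded = rng.standard_normal((3, 64, 64))   # stand-in for an occluded input image
flat = occluded.reshape(-1)                   # 3*64*64 = 12288 features

# A single linear projection down to the generator's expected 100-dim input.
# In the real model this would be learned conv/linear layers, not fixed weights.
W = rng.standard_normal((100, flat.size)) * 0.01
b = np.zeros(100)
z = W @ flat + b

print(z.shape)  # (100,) -- feed this to the generator instead of a sampled latent vector
```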
QUESTION
I came across this on GitHub (snippet from here):
...ANSWER
Answered 2021-Mar-21 at 13:50
If you are working with data where the batch size is the first dimension, then you can interchange real_cpu.size(0) with len(real_cpu) or with len(data[0]).
However, when working with some models like LSTMs, the batch size can be in the second dimension, in which case you couldn't use len, but rather real_cpu.size(1), for example.
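The same relationship can be checked with numpy arrays (torch tensors behave analogously, with .size(0) in place of .shape[0]):

```python
import numpy as np

batch = np.zeros((8, 3, 64, 64))   # batch-first layout: 8 images

# len() reads the first dimension, so it matches the batch size here
print(len(batch) == batch.shape[0])  # True

seq = np.zeros((50, 8, 128))       # sequence-first layout, as in some LSTM setups
# Here len() would return the sequence length (50), so read the
# batch size from the second dimension instead:
print(seq.shape[1])  # 8
```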
QUESTION
I'm building a DCGAN, and I am having a problem with the shape of the output, it is not matching the shape of the labels when I try calculating the BCELoss.
To generate the discriminator output, do I have to use convolutions all the way down or can I add a Linear layer at some point to match the shape I want?
I mean, do I have to reduce the shape by adding more convolutional layers, or can I add a fully connected one? I thought it should have a fully connected layer, but in every tutorial I checked, the discriminator had no fully connected layer.
...ANSWER
Answered 2021-Mar-14 at 03:36
The DCGAN paper described a concrete architecture in which Conv layers were used for downsampling the feature maps. If you carefully design your Conv layers, you can do without a Linear layer, but that does not mean it will not work when you use a Linear layer to downsample (especially as the very last layer). The DCGAN paper simply found that it worked better to use Conv layers instead of Linear layers to downsample.
If you want to maintain this architecture, you can change the kernel size, padding, or stride to give you exactly a single value in the last layer. Refer to the PyTorch documentation on Conv layers to see what the output size will be for a given input size.
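That output-size formula can be checked in plain Python. For example, the standard DCGAN discriminator settings (kernel 4, stride 2, padding 1) halve a 64x64 input at each layer, and a final kernel-4, stride-1, padding-0 layer collapses the remaining 4x4 map to a single value (a hypothetical layer plan for illustration):

```python
def conv_out(size, kernel, stride, padding):
    # Output spatial size of a conv layer, per the PyTorch Conv2d docs
    return (size + 2 * padding - kernel) // stride + 1

size = 64
for _ in range(4):                   # four k=4, s=2, p=1 downsampling layers
    size = conv_out(size, 4, 2, 1)   # 64 -> 32 -> 16 -> 8 -> 4

size = conv_out(size, 4, 1, 0)       # final k=4, s=1, p=0 layer
print(size)  # 1 -- a single value per image, ready for BCELoss
```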
QUESTION
I'm trying to train a GAN on some images. I followed the tutorial on PyTorch's page and got to the following code, but when the cross-entropy function is applied during training, it returns the error shown below the code:
...ANSWER
Answered 2021-Mar-09 at 09:02
Your model's output is not consistent with your criterion.
If you want to keep the model and change the criterion: use BCELoss instead of CrossEntropyLoss. Note: you will need to cast your labels to float before passing them in. Also consider removing the Sigmoid() from the model and using BCEWithLogitsLoss.
If you want to keep the criterion and change the model: CrossEntropyLoss expects the shape (..., num_classes). So for your 2-class case (real & fake), you will have to predict 2 values for each image in the batch, which means you will need to alter the output channels of the last layer in your model. It also expects the raw logits, so you should remove the Sigmoid().
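The sigmoid/BCE relationship behind the "remove the Sigmoid() and use BCEWithLogitsLoss" advice can be sketched with numpy (hand-rolled loss functions for illustration, not the PyTorch implementations): BCE applied to sigmoid outputs gives the same value as a with-logits BCE applied to the raw outputs, with the latter being numerically stabler.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    # binary cross-entropy on probabilities (what BCELoss computes)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def bce_with_logits(z, y):
    # numerically stabler form computed directly from raw logits
    return np.mean(np.clip(z, 0, None) - z * y + np.log1p(np.exp(-np.abs(z))))

logits = np.array([2.0, -1.5, 0.3, -0.7])   # raw discriminator outputs
labels = np.array([1.0, 0.0, 1.0, 0.0])     # 1 = real, 0 = fake, cast to float

a = bce(sigmoid(logits), labels)
b = bce_with_logits(logits, labels)
print(abs(a - b) < 1e-9)  # True -- same loss, computed from logits
```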
QUESTION
I am trying to implement a GAN to generate a network-traffic .csv dataset (tabular GAN), and my training output continued to show [D loss: nan, acc.: 50%] [G loss: nan]. I figured this was because my dataset had NaN values after preprocessing, so I used the code "assert not np.any(np.isnan(x))", and I get the error below. I need help...
...ANSWER
Answered 2021-Mar-04 at 14:48
I figured it out eventually. I used .dropna(how='any', inplace=True) after dropping unwanted columns, and it solved the problem. Now training reports 93.57% accuracy.
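The numpy equivalent of that fix (a minimal sketch with made-up data; the answer itself used pandas' .dropna) is to mask out any row containing a NaN before training:

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [np.nan, 3.0],    # this row would poison the GAN losses with NaN
              [4.0, 5.0]])

clean = x[~np.isnan(x).any(axis=1)]   # keep only rows with no NaN anywhere

assert not np.any(np.isnan(clean))    # the check from the question now passes
print(clean.shape)  # (2, 2)
```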
QUESTION
Although there are a few questions related to the same error, I couldn't solve my problem by looking at those.
I'm trying to build a GAN for a uni assignment. My code is very similar to the intro example in this tutorial from TF's website.
Below are what I think are the relevant parts of the code (I can provide more details if needed, e.g. how the discriminator model is built). The line that gives me the error is:
...ANSWER
Answered 2021-Feb-28 at 12:04
So, I finally found what was causing the issue. It is related to the layers in the discriminator model, which is not even included in the code chunk above, as I thought that was not the problem (because when I tested the discriminator as a standalone model, it worked). Here is how it is defined:
QUESTION
I am implementing DCGANs using PyTorch.
It works well, in that I can get reasonably good generated images; however, I now want to evaluate the health of the GAN models using metrics, mainly the ones introduced by this guide: https://machinelearningmastery.com/practical-guide-to-gan-failure-modes/
Their implementation uses Keras, whose SDK lets you define what metrics you want when you compile the model; see https://keras.io/api/models/model/. In this case, the metric is the accuracy of the discriminator, i.e. the percentage of the time it successfully identifies an image as real or generated.
With the PyTorch SDK, I can't seem to find a similar feature that would help me easily acquire this metric from my model.
Does Pytorch provide the functionality to be able to define and extract common metrics from a model?
...ANSWER
Answered 2021-Feb-25 at 10:51
Pure PyTorch does not provide metrics out of the box, but it is very easy to define them yourself.
Also, there is no such thing as "extracting metrics from a model". Metrics are metrics; they measure (in this case, the accuracy of the discriminator); they are not inherent to the model.
Binary accuracy
In your case, you are looking for a binary accuracy metric. The code below works with either logits (the unnormalized probability output by the discriminator, probably the last nn.Linear layer without activation) or probabilities (the last nn.Linear followed by a sigmoid activation):
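A minimal numpy version of such a binary-accuracy metric (an illustrative stand-in for the PyTorch code the answer goes on to show): threshold logits at 0 (equivalently, probabilities at 0.5) and compare the predictions to the labels.

```python
import numpy as np

def binary_accuracy(outputs, targets, from_logits=True):
    # logits > 0 corresponds exactly to sigmoid(logits) > 0.5
    threshold = 0.0 if from_logits else 0.5
    preds = (outputs > threshold).astype(float)
    return float((preds == targets).mean())

logits  = np.array([1.2, -0.4, 0.1, -2.0])   # raw discriminator outputs
targets = np.array([1.0, 0.0, 0.0, 0.0])     # the discriminator got one wrong

print(binary_accuracy(logits, targets))                       # 0.75
print(binary_accuracy(1 / (1 + np.exp(-logits)), targets,
                      from_logits=False))                     # 0.75 -- same result
```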
QUESTION
I am following a tutorial on DCGANs. Whenever I try to load the CelebA dataset, torchvision uses up all of my runtime's memory (12 GB) and the runtime crashes. I am looking for ways to load and apply transformations to the dataset without hogging my runtime's resources.
To reproduce
Here is the part of the code that is causing issues.
...ANSWER
Answered 2021-Jan-01 at 10:05
Try the following:
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install DCGAN
You can use DCGAN like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution (including header files), a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.