DCGAN-tensorflow | TensorFlow implementation of "Deep Convolutional Generative Adversarial Networks" | Machine Learning library

 by carpedm20 | Python | Version: Current | License: MIT

kandi X-RAY | DCGAN-tensorflow Summary

DCGAN-tensorflow is a Python library typically used in Artificial Intelligence, Machine Learning, Deep Learning, TensorFlow, and Generative Adversarial Network applications. DCGAN-tensorflow has no reported bugs or vulnerabilities, a permissive license, and medium support. You can download it from GitHub.

A tensorflow implementation of "Deep Convolutional Generative Adversarial Networks"

            kandi-support Support

              DCGAN-tensorflow has a medium active ecosystem.
              It has 7092 star(s) with 2684 fork(s). There are 251 watchers for this library.
              It had no major release in the last 6 months.
              There are 180 open issues and 124 closed issues. On average, issues are closed in 202 days. There are 6 open pull requests and 0 closed requests.
              It has a neutral sentiment in the developer community.
              The latest version of DCGAN-tensorflow is current.

            kandi-Quality Quality

              DCGAN-tensorflow has 0 bugs and 0 code smells.

            kandi-Security Security

              DCGAN-tensorflow has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              DCGAN-tensorflow code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              DCGAN-tensorflow is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              DCGAN-tensorflow releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.
              DCGAN-tensorflow saves you 962 person hours of effort in developing the same functionality from scratch.
              It has 2190 lines of code, 52 functions and 11 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.


            DCGAN-tensorflow Key Features

            No Key Features are available at this moment for DCGAN-tensorflow.

            DCGAN-tensorflow Examples and Code Snippets

            No Code Snippets are available at this moment for DCGAN-tensorflow.

            Community Discussions


            How to interpret the discriminator's loss and the generator's loss in Generative Adversarial Nets?
            Asked 2019-Aug-03 at 22:06

            I am reading people's implementation of DCGAN, especially this one in tensorflow.

            In that implementation, the author plots the losses of the discriminator and of the generator; the plots (from https://github.com/carpedm20/DCGAN-tensorflow) are not reproduced here.

            Neither loss seems to follow any pattern, unlike ordinary neural networks, whose loss decreases as training progresses. How should the losses be interpreted when training GANs?



            Answered 2017-Nov-09 at 12:37

            Unfortunately, as you've said, for GANs the losses are very non-intuitive. Mostly it comes down to the fact that the generator and the discriminator are competing against each other, so an improvement in one means a higher loss for the other, until that other learns better from the loss it receives, which in turn hampers its competitor, and so on.

            Now one thing that should happen often enough (depending on your data and initialisation) is that both the discriminator and generator losses converge to some steady values. (It's OK for the loss to bounce around a bit; that's just evidence of the model trying to improve itself.)

            This loss convergence would normally signify that the GAN model found some optimum, where it can't improve more, which also should mean that it has learned well enough. (Also note, that the numbers themselves usually aren't very informative.)

            Here are a few side notes, that I hope would be of help:

            • If the losses haven't converged well, it doesn't necessarily mean the model hasn't learned anything. Check the generated samples; sometimes they come out good enough. Alternatively, try changing the learning rate and other parameters.
            • If the model converged well, still check the generated samples. Sometimes the generator finds one or a few examples that the discriminator can't distinguish from genuine data. The trouble is that it then keeps producing those few, never creating anything new; this is called mode collapse. Introducing some diversity into your data usually helps.
            • As vanilla GANs are rather unstable, I'd suggest using some version of the DCGAN models, as they contain features like convolutional layers and batch normalisation that are supposed to help with the stability of convergence. (The plot described above comes from a DCGAN rather than a vanilla GAN.)
            • This is common sense, but still: as with most neural-net architectures, tweaking the model, i.e. changing its parameters and/or architecture to fit your data, can improve the model or break it.
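To make the tug-of-war concrete, here is a minimal NumPy sketch of the standard GAN losses computed from discriminator logits; the function and variable names are illustrative, not taken from the repository:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_logits_real, d_logits_fake):
    """Standard GAN losses computed from the discriminator's logits.

    d_loss: binary cross-entropy pushing real samples toward 1, fakes toward 0.
    g_loss: non-saturating generator loss pushing fakes toward 1.
    """
    p_real = sigmoid(d_logits_real)
    p_fake = sigmoid(d_logits_fake)
    d_loss = -np.mean(np.log(p_real)) - np.mean(np.log(1.0 - p_fake))
    g_loss = -np.mean(np.log(p_fake))
    return d_loss, g_loss

# A confident discriminator (fakes scored low) has a small d_loss but hands
# the generator a large g_loss; as the generator catches up, d_loss rises
# while g_loss falls.
d1, g1 = gan_losses(np.array([2.0]), np.array([-2.0]))  # strong discriminator
d2, g2 = gan_losses(np.array([2.0]), np.array([0.0]))   # generator catching up
```

This see-saw is exactly why neither curve decreases monotonically: one network's progress shows up as the other network's loss.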

            Source https://stackoverflow.com/questions/42690721


            Tensorflow is not reusing variables under scope
            Asked 2019-May-10 at 22:10

            There is a lot of code in which a subset of a neural network's layers is reused. I have always used the following pattern, which can be found, for example, here:



            Answered 2019-May-10 at 22:10

            Answering myself: according to the answer, the layers do share the same weights, but the tensors on the graph are separate:
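The snippet itself was not preserved on this page, but the pattern in question is the usual variable_scope reuse idiom. A minimal sketch, written against tf.compat.v1 so it runs on TensorFlow 2.x, with a toy linear layer standing in for the real network:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def linear(x, out_dim, scope, reuse=False):
    # With reuse=False, get_variable creates "scope/w"; with reuse=True it
    # returns the variable of the same name created earlier in that scope.
    with tf.variable_scope(scope, reuse=reuse):
        w = tf.get_variable("w", [int(x.shape[-1]), out_dim],
                            initializer=tf.ones_initializer())
        return tf.matmul(x, w)

a = tf.placeholder(tf.float32, [None, 3])
b = tf.placeholder(tf.float32, [None, 3])
y_a = linear(a, 2, "shared")               # creates shared/w
y_b = linear(b, 2, "shared", reuse=True)   # reuses shared/w

# Two distinct graph tensors, but only one trainable variable underneath.
assert y_a is not y_b
assert len(tf.trainable_variables("shared")) == 1
```

The two matmul ops are separate nodes on the graph, which is what the answer means by "the tensors are separate" even though the weights are shared.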

            Source https://stackoverflow.com/questions/56084788


            DCGANs: discriminator getting too strong too quickly to allow generator to learn
            Asked 2018-Dec-04 at 04:37

            I am trying to use this version of the DCGAN code (implemented in TensorFlow) with some of my data. I run into the problem of the discriminator becoming too strong far too quickly for the generator to learn anything.

            Now there are some tricks typically recommended for that problem with GANs:

            • batch normalisation (already there in DCGANs code)

            • giving a head start to generator.

            I did some version of the latter by allowing 10 iterations of the generator per 1 of the discriminator (not just at the beginning, but throughout the entire training); the resulting loss plot is not reproduced here.

            Adding more generator iterations in this case only delays the inevitable: the discriminator grows too strong and suppresses the generator's learning.

            Hence I would like to ask for advice: is there another way to keep the discriminator from becoming too strong?
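The head-start trick described above can be sketched as a plain Python training loop; d_step and g_step are hypothetical placeholders for the real optimizer update calls:

```python
def train_with_ratio(num_steps, d_step, g_step, g_per_d=10):
    """Run g_per_d generator updates for every discriminator update.

    d_step and g_step are callables performing one optimizer step each;
    in the real code they would wrap the session.run(...) update ops.
    """
    for _ in range(num_steps):
        d_step()
        for _ in range(g_per_d):
            g_step()

# Toy check with counters standing in for real update ops:
counts = {"d": 0, "g": 0}
train_with_ratio(5,
                 d_step=lambda: counts.__setitem__("d", counts["d"] + 1),
                 g_step=lambda: counts.__setitem__("g", counts["g"] + 1))
# counts is now {"d": 5, "g": 50}
```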



            Answered 2017-Jun-16 at 13:26

            To summarise this topic - the generic advice would be:

            • try playing with model parameters (like learning rates, for instance)
            • try adding more variety to the input data
            • try adjusting the architecture of both generator and discriminator networks.

            However, in my case the issue was data scaling: I had changed the format of the input data from the initial .jpg to .npy and lost the rescaling along the way. Please note that this DCGAN-tensorflow code rescales the input data to the [-1,1] range, and the model is tuned to work with that range.
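The lost rescaling is easy to restore. This is a generic NumPy sketch for uint8 images, not the repository's exact code:

```python
import numpy as np

def to_tanh_range(images_uint8):
    """Rescale uint8 images from [0, 255] into the [-1, 1] range DCGAN expects."""
    return images_uint8.astype(np.float32) / 127.5 - 1.0

def from_tanh_range(images):
    """Inverse map, e.g. for saving generated samples back as uint8."""
    return np.round((images + 1.0) * 127.5).astype(np.uint8)

x = np.array([0, 128, 255], dtype=np.uint8)
y = to_tanh_range(x)                      # values in [-1, 1]
assert y.min() >= -1.0 and y.max() <= 1.0
assert (from_tanh_range(y) == x).all()
```

Applying this once when loading the .npy arrays restores the range the model was tuned for.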

            Source https://stackoverflow.com/questions/44313306


            Change DCGAN loss function, that is defined in Tensorflow
            Asked 2017-Sep-21 at 13:45

            I would like to add another term to the generator loss function in DCGAN-tensorflow model.py (code lines 127-133). Like this:



            Answered 2017-Sep-21 at 13:45

            It appears Tensorflow has most of the necessary operations that are available in numpy, so here is the tf version of my numpy code above:
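The original snippets did not survive on this page; as a stand-in, here is the general shape of mirroring a NumPy loss term before porting it to TensorFlow. The extra mean-squared term, the function name, and the weight are purely illustrative:

```python
import numpy as np

# Hypothetical extra term: an L2 penalty pulling generated samples toward a
# reference batch. The TensorFlow version reads almost identically, e.g.
# g_loss + weight * tf.reduce_mean(tf.square(g_out - reference)).
def augmented_g_loss(g_loss, g_out, reference, weight=0.1):
    extra = np.mean(np.square(g_out - reference))
    return g_loss + weight * extra

loss = augmented_g_loss(0.5, np.zeros((4, 8)), np.ones((4, 8)))
assert abs(loss - 0.6) < 1e-9
```

Because most NumPy reductions and elementwise ops have direct tf equivalents (np.mean → tf.reduce_mean, np.square → tf.square), the port is usually a mechanical renaming.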

            Source https://stackoverflow.com/questions/46326238


            How to increase the size of deconv2d filters for a fixed data size?
            Asked 2017-Jun-09 at 05:25

            I am trying to adjust this DCGAN code to be able to work with 2x80 data samples.

            All generator layers are tf.nn.deconv2d other than h0, which is a ReLU. Generator filter sizes per level are currently:



            Answered 2017-Jun-09 at 05:25

            In your ops.py file, the problem comes from the stride size in your deconv filter. Modify the headers for conv2d and deconv2d to:
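The modified headers themselves are not shown on this page, but the relationship the stride controls can be sanity-checked numerically: with SAME padding, a transposed convolution multiplies each spatial dimension by its stride, while a strided convolution divides it (rounding up). A small helper with hypothetical names:

```python
def deconv_output_size(in_size, stride):
    """Spatial output size of a SAME-padded transposed convolution."""
    return in_size * stride

def conv_output_size(in_size, stride):
    """Spatial output size of a SAME-padded strided convolution (ceil division)."""
    return -(-in_size // stride)

# For a 2x80 sample, striding by 2 along the height collapses it after one
# layer; striding by 1 in height and 2 in width keeps the shapes sensible.
assert conv_output_size(80, 2) == 40
assert conv_output_size(2, 2) == 1    # height collapses quickly
assert conv_output_size(2, 1) == 2    # height preserved
assert deconv_output_size(40, 2) == 80
```

This is why strongly rectangular data usually calls for per-dimension strides rather than the square strides the original code assumes.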

            Source https://stackoverflow.com/questions/44311244


            glob(os.path.join()) to work with the .npy data
            Asked 2017-May-28 at 19:12

            I am trying to adapt the DC-GAN code so that it works with my data. The original code uses JPEG data; however, I would much prefer to keep my data in .npy format.

            The problem is line 76: self.data = glob(os.path.join("./data", self.dataset_name, self.input_fname_pattern)) won't work with numpy data (it comes back blank, i.e. []).

            Hence I am wondering what's a good replacement for glob(os.path.join()) for numpy files? Or are there any parameters that would make glob compatible with the numpy data?



            Answered 2017-May-28 at 19:12

            In DCGAN.__init__, change input_fname_pattern='*.jpg' to input_fname_pattern='*.npy':
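The change amounts to a one-word pattern swap; a standalone sketch with throwaway files (the directory and file names are illustrative):

```python
import os
import tempfile
from glob import glob

# Throwaway directory with two empty stand-in .npy files.
data_dir = tempfile.mkdtemp()
for name in ("sample_0.npy", "sample_1.npy"):
    open(os.path.join(data_dir, name), "wb").close()

jpgs = glob(os.path.join(data_dir, "*.jpg"))          # [] -- the original pattern
npys = sorted(glob(os.path.join(data_dir, "*.npy")))  # finds both files
```

glob itself is format-agnostic; it only matches file names, so the empty list simply meant the '*.jpg' pattern matched nothing in a directory of .npy files.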

            Source https://stackoverflow.com/questions/44230637


            AttributeError: module 'tensorflow.contrib.slim' has no attribute 'model_analyzer'
            Asked 2017-May-22 at 16:00

            Running the DCGAN-tensorflow tutorial, more precisely the commands python download.py mnist celebA and python main.py --dataset celebA --input_height=108 --train --crop, I get the following error:



            Answered 2017-May-22 at 16:00

            I realised what the problem was: I just needed a newer version of TensorFlow (I had 0.9.0 versus the required 0.12.1), and pip install tensorflow --upgrade solved it.

            Source https://stackoverflow.com/questions/44115133


            Google Cloud ML and GCS Bucket issues
            Asked 2017-Mar-16 at 03:23

            I'm using open source Tensorflow implementations of research papers, for example DCGAN-tensorflow. Most of the libraries I'm using are configured to train the model locally, but I want to use Google Cloud ML to train the model since I don't have a GPU on my laptop. I'm finding it difficult to change the code to support GCS buckets. At the moment, I'm saving my logs and models to /tmp and then running a 'gsutil' command to copy the directory to gs://my-bucket at the end of training (example here). If I try saving the model directly to gs://my-bucket it never shows up.

            As for training data, one of the tensorflow samples copies data from GCS to /tmp for training (example here), but this only works when the dataset is small. I want to use celebA, and it is too large to copy to /tmp every run. Is there any documentation or guides for how to go about updating code that trains locally to use Google Cloud ML?

            The implementations are running various versions of TensorFlow, mainly 0.11 and 0.12.



            Answered 2017-Mar-16 at 03:23

            There is currently no definitive guide. The basic idea is to replace all occurrences of native Python file operations with their equivalents from TensorFlow's file_io module.

            These equivalents work locally as well as on GCS (and on any registered file system). Note, however, that there are some slight differences between file_io and the standard file operations (e.g., a different set of 'modes' is supported).
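A minimal sketch of the file_io replacement for Python's built-in open; the local temp path stands in for a gs:// URL, since file_io handles both:

```python
import os
import tempfile

from tensorflow.python.lib.io import file_io

# file_io.FileIO accepts local paths and gs:// URLs alike; a local temp
# path stands in here for gs://my-bucket/... (no GCS credentials needed).
path = os.path.join(tempfile.mkdtemp(), "notes.txt")

with file_io.FileIO(path, mode="w") as f:   # replaces open(path, "w")
    f.write("hello from file_io")

with file_io.FileIO(path, mode="r") as f:
    contents = f.read()
```

Swapping the path for a gs://bucket/object URL is the only change needed once the job runs on Cloud ML with the right credentials.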

            Fortunately, checkpoint and summary writing do work out of the box, just be sure to pass a GCS path to tf.train.Saver.save and tf.summary.FileWriter.

            In the sample you sent, that looks potentially painful. Consider monkey-patching the Python file functions to their TensorFlow equivalents when the program starts, so you only have to do it once (demonstrated here).

            As a side note, all of the samples on this page show reading files from GCS.

            Source https://stackoverflow.com/questions/42799117

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.



            Install DCGAN-tensorflow

            You can download it from GitHub.


            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.

            To clone the repository:

            gh repo clone carpedm20/DCGAN-tensorflow
