ims | image manipulation service, written in Go | Computer Vision library

by wyattjoh | Go Version: v1.4.18 | License: MIT

kandi X-RAY | ims Summary

ims is a Go library typically used in Artificial Intelligence and Computer Vision applications. ims has no bugs, no reported vulnerabilities, a permissive license, and low support. You can download it from GitHub.
ims (image manipulation service) is designed to perform image transformations and optimizations on the fly as a pure Go solution. The application also ships with pprof for performance profiling.

kandi-support Support

ims has a low active ecosystem.
It has 15 stars and 1 fork. There is 1 watcher for this library.
There were 3 major releases in the last 12 months.
ims has no reported issues. There is 1 open pull request and 0 closed pull requests.
It has a neutral sentiment in the developer community.
The latest version of ims is v1.4.18.
kandi-Quality Quality

ims has no bugs reported.
kandi-Security Security

ims has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

kandi-License License

ims is licensed under the MIT License. This license is Permissive.
Permissive licenses have the least restrictions, and you can use them in most projects.

kandi-Reuse Reuse

ims releases are available to install and integrate.
Installation instructions, examples and code snippets are available.
Top functions reviewed by kandi - BETA

kandi has reviewed ims and discovered the below as its top functions. This is intended to give you an instant insight into the functionality ims implements, and to help you decide if it suits your requirements.

• Serve starts the HTTP server.
• New creates a new provider.
• main is the entry point for testing.
• ResizeImage resizes the image to the given width and height.
• Process decodes an image from an input reader.
• ServeAction runs the action handler.
• Image applies an image to the image.
• RotateImage rotates an image.
• getFilename extracts the filename from the provider.
• Middleware returns an http.HandlerFunc for the request.

Get all kandi verified functions for this library.

ims Key Features

image manipulation service, written in Go

ims Examples and Code Snippets

Lines of Code: 37 | License: Permissive (MIT)

    const Crypto = require("crypto");
    const querystring = require("querystring");

    const transformationOptions = {
      width: 100,
      height: 200,
    };

    // Change this to the secret that you gave to ims via the `--signing-secret`
    // flag.
    const secret = "keyboard cat";

    // Create the sorted query object.
    let value = Object.keys(transformationOptions)
      .sort()
      .reduce((result, key) => {
        result.push(querystring.stringify({ [key]: transformationOptions[key] }));
        return result;
      }, [])
      .join("&");

    // If you've enabled --signing-with-path, you need to include the path
    // component in your value:
    //
    // value = "/my-image.jpg?" + value;

    const sig = Crypto.createHmac("sha256", secret).update(value).digest("hex");

    console.log(value + "&sig=" + sig);

    # require all requests to have their query parameters signed with a HS256
    # signature
    ims --signing-secret "keyboard cat"

    # require all requests to have their query parameters and path signed with a
    # HS256 signature
    ims --signing-secret "keyboard cat" --signing-with-path
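For clients not written in JavaScript, the same signature can be produced in Python. This is a minimal sketch assuming the scheme shown in the snippet above (parameters sorted by key, URL-encoded, joined with "&", optional path prefix, HMAC-SHA256 hex digest); the function name is illustrative:

```python
import hashlib
import hmac
from urllib.parse import urlencode


def sign_query(params, secret, path=None):
    """Build a signed ims query string (sketch; mirrors the JS example above)."""
    # Sort the keys and URL-encode each parameter individually, matching
    # querystring.stringify applied to one key at a time.
    value = "&".join(urlencode({k: params[k]}) for k in sorted(params))
    # With --signing-with-path, the request path is part of the signed value.
    if path is not None:
        value = path + "?" + value
    sig = hmac.new(secret.encode(), value.encode(), hashlib.sha256).hexdigest()
    return value + "&sig=" + sig


print(sign_query({"width": 100, "height": 200}, "keyboard cat"))
```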
Lines of Code: 36 | License: Permissive (MIT)

    NAME:
       ims - Image Manipulation Server

    USAGE:
       ims [global options] command [command options] [arguments...]

    COMMANDS:
       help, h  Shows a list of commands or help for one command

    GLOBAL OPTIONS:
       --listen-addr value     the address to listen for new connections on (default: "")
       --backend value         comma separated , where  is a pathname or a url (with scheme) to load images from or just  and the host will be the listen address
       --origin-cache value    cache the origin resources based on their cache headers (:memory: for memory based cache, directory name for file based, not specified for disabled)
       --signing-secret value  when provided, will be used to verify signed image requests made to the domain
       --tracing-uri value     when provided, will be used to send tracing information via opentracing
       --signing-with-path     when provided, the path will be included in the value to compute the signature
       --disable-metrics       disable the prometheus metrics
       --timeout value         used to set the cache control max age headers, set to 0 to disable (default: 15m0s)
       --cors-domain value     use to enable CORS for the specified domain (note, this is not required to use as an image service)
       --debug                 enable debug logging and pprof routes
       --json                  print logs out in JSON
       --help, -h              show help
       --version, -v           print the version

    # will serve images from the $PWD
    ims

    # will serve images from the specified folder
    ims --backend /specific/folder/with/images

    # when the request is made to the host, will serve images from
    # the /var/images/alpha directory, when the request is made to the
    # host, will serve images from the /var/images/beta directory.
    ims --backend,/var/images/alpha \
        --backend,/var/images/beta
Lines of Code: 5 | License: Permissive (MIT)

    go get -u

    brew install wyattjoh/stable/ims

    docker pull wyattjoh/ims
    # or via the Github Container Registry
    docker pull
Community Discussions

Trending Discussions on ims

• How to reformat a corrupt json file with escaped ' and "?
• How to make subplots having different range on each axis have the same figure size using matplotlib?
• How to draw a circle in a square in python?
• How to input user images to predict with Tensorflow?
• DQN Pytorch Loss keeps increasing
• node-pre-gyp: not found --fallback to build error while installing bcrypt
• Matplotlib plot without whitespace in tkinter frame
• bounding boxes on handwritten digits with opencv
• dtype object cannot be converted to float matplotlib
• How to compress folder into an archive file by command line without knowing the full name of the folder?

How to reformat a corrupt json file with escaped ' and "?
Asked 2021-Jun-13 at 11:41


I have a large JSON file (~700,000 lines, 1.2 GB file size) containing Twitter data that I need to preprocess for data and network analysis. During the data collection an error happened: instead of using " as a separator, ' was used. As this does not conform to the JSON standard, the file cannot be processed by R or Python.

Information about the dataset: about every 500 lines there is a block of meta info plus meta information for the users, etc.; then come the tweets in JSON (the order of fields is not stable), starting with a space, one tweet per line.

This is what I tried so far:

1. A simple data.replace('\'', '\"') is not possible, as the "text" fields contain tweets which may themselves contain ' or ".
2. Using regex, I was able to catch some of the instances, but not all of them: re.compile(r'"[^"]*"(*SKIP)(*FAIL)|\'') (note that the (*SKIP)(*FAIL) verbs require the third-party regex module; the built-in re module rejects them).
3. Using literal_eval(data) from the ast package also throws an error.
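To see why the first attempt fails, consider a one-line example: a blanket replace also clobbers the apostrophes inside the tweet text, so the result is still not valid JSON.

```python
line = "{'text': 'it's fine'}"
print(line.replace("'", '"'))
# → {"text": "it"s fine"}  (the apostrophe inside the tweet is replaced too)
```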

As neither the order of the fields nor the length of each field is stable, I am stuck on how to reformat the file so that it conforms to JSON.

Normal sample line of the data (for this line, options one and two would work, but note that the tweets are also in non-English languages, which may use " or ' in their tweets):

                                                                                                       {'author_id': '1236888827605725186', 'entities': {'mentions': [{'start': 108, 'end': 124, 'username': 'realDonaldTrump'}], 'hashtags': [{'start': 49, 'end': 55, 'tag': 'QAnon'}, {'start': 56, 'end': 66, 'tag': 'ProudBoys'}]}, 'context_annotations': [{'domain': {'id': '10', 'name': 'Person', 'description': 'Named people in the world like Nelson Mandela'}, 'entity': {'id': '799022225751871488', 'name': 'Donald Trump', 'description': 'US President Donald Trump'}}, {'domain': {'id': '35', 'name': 'Politician', 'description': 'Politicians in the world, like Joe Biden'}, 'entity': {'id': '799022225751871488', 'name': 'Donald Trump', 'description': 'US President Donald Trump'}}], 'text': 'RT @NinjaHodon: Here’s an example of the average #QAnon #ProudBoys crackass trash that’s going to vote for @realDonaldTrump. \n\n https://t.…', 'referenced_tweets': [{'type': 'retweeted', 'id': '1315363137240010753'}], 'conversation_id': '1315441338427506689', 'id': '1315441338427506689', 'lang': 'en', 'public_metrics': {'retweet_count': 20, 'reply_count': 0, 'like_count': 0, 'quote_count': 0}, 'created_at': '20201011T23:57:09.000Z', 'source': 'Twitter for Android', 'possibly_sensitive': False}
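Lines like the sample above are valid Python literals (single quotes, False instead of false), so for the uncorrupted majority of lines a line-by-line ast.literal_eval pass is one way to convert them to JSON. A sketch (it will still raise on the mixed-quote lines shown next):

```python
import ast
import json


def literal_line_to_json(line):
    """Parse one Python-literal-style line and re-serialize it as JSON.

    Works only on lines that are valid Python literals; raises
    ValueError/SyntaxError on the corrupted mixed-quote lines.
    """
    return json.dumps(ast.literal_eval(line.strip()), ensure_ascii=False)


sample = "{'lang': 'en', 'possibly_sensitive': False, 'id': '123'}"
print(literal_line_to_json(sample))
# → {"lang": "en", "possibly_sensitive": false, "id": "123"}
```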

                                                                                                      Reformatted sample line which causes an issue:

                                                                                                          {"users": [{"id": "437781219", "username": "HakesJon", "location": `"Wisconsin", "description": "#IndieFictionWriter. Husband. Father. Bearded.\n#BlackLivesMatter #DemilitarizeThePolice #DismantlePolicing", "name": "Jon Hakes", "created_at": "20111215T20:42:41.000Z"}, {"id": "1171947445841997824", "username": "FactNc", "location": "Under Carolina blue skies ", "description": "Defender of truth, justice and the American way.  "I never give them hell. I just tell the truth and they think it\'s hell." Harry S. Truman", "name": "NCFactFinder", "created_at": "20190912T00:44:21.000Z"}, {"id": "315041625", "username": "o0rimbuk0o", "description": "Your desire to put pronouns here is not my issue. Get help.\n\n#resist #notmypresident\n#FBiden", "name": "Sick of it", "created_at": "20110611T06:16:11.000Z"}, {"id": "3141427487", "username": "theGeekSheek", "description": "I don't believe in your God.  Don't tell me he hates me.", "name": "Chic Geek", "created_at": "20150406T18:34:45.000Z"}, {"id": "1084112678", "username": "KarinBorjeesson", "description": "Love to help people & animals in need. Love music. Fucking hate racists. 
#Anon #OpExposeCPS #BLM #FreePalestine #Yemen #OpSerenaShim #Animalrights #NoDAPL", "name": "AnonyMISSKarin", "created_at": "20130112T20:57:28.000Z"}, {"id": "1003712866011308033", "username": "persian_pesar", "description": "\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200fبه ستواری و سختی رشک پولاد/\nبه راه عشق سرها داده بر باد/\nقرین بیستون هم\u200cسنگ فرهاد/\nز کرمانشاهیان یاد اینچنین باد\n\u200e#Civil_Environment_Engineer", "name": "persianpesar🏳\u200d🌈", "created_at": "20180604T18:59:30.000Z"}, {"id": "814795859644809217", "username": "Aazadist", "description": "\u200f\u200e#Equality🌐\n\u200e#Humanity🌐\nخواهی نشوی همرنگ ، رسوای جماعت شو", "name": "Aazad 🏳️\u200d🌈 آزاد", "created_at": "20161230T11:30:45.000Z"}, {"id": "790375699638915072", "username": "Isaihstewart", "location": "Los Angeles, CA", "description": "Part time assistant manager at “Sheets and Things”", "name": "Dey got the henessey 🗣", "created_at": "20161024T02:13:46.000Z"}, {"id": "4846243708", "username": "williamvercetti", "location": "Virginia Beach, VA", "description": "vma. art. modelo papi. tpain to the dms.", "name": "William Vercetti", "created_at": "20160125T17:21:50.000Z"}, {"id": "1160723882", "username": "k_cawsey", "location": "Halifax, Nova Scotia", "description": "Chaucer, Malory, Arthur Tolkien. @Dal_English", "name": "Dr. Kathy Cawsey", "created_at": "20130208T17:15:30.000Z"}, {"id": "3789298943", "username": "solomonesther17", "location": "Lagos, Nigeria", "description": "FairBib Legal Practitioners", "name": "Esther Solomon", "created_at": "20150927T04:52:29.000Z"}, {"id": "14860380", "username": "Dejify", "location": "San Francisco", "description": "The Nigerian State is a festering boil that the world can't afford to ignore. 
Because, when it pops, its rancid ooze won't be pleasant nor easy to contain.", "name": "Buhari: Uber Ment (Dèjì Akọ́mọláfẹ́)", "created_at": "20080521T18:57:27.000Z"}, {"id": "1120883223070773248", "username": "Donna780780", "description": "", "name": "Donna Swidley", "created_at": "20190424T02:52:40.000Z"}, {"id": "1253742908487929858", "username": "Neros_sis", "location": "Florida", "description": "", "name": "@Nero's Fiddle  GOP has a terrorism problem", "created_at": "20200424T17:50:00.000Z"}, {"id": "585090491", "username": "vickierae562", "location": "The LBC", "description": "That’s Right, I’m a Lefty 🤣 and I don’t feed trolls! #resist #DumpTrump #DitchMitch #LooseLindsey", "name": "Vickie Rae", "created_at": "20120519T21:00:28.000Z"}, {"id": "1262122532607574022", "username": "EmilySi49944255", "description": "", "name": "Skylar Aubrey", "created_at": "20200517T20:47:34.000Z"}, {"id": "1401663176", "username": "mdeHummelchen", "location": "Tief im Westen", "description": "Pflegewissenschaftlerin,Pflegeberaterin,Dozentin,Lächeln und winken...Pro Pflegekammer", "name": "Madame Hummelchen 💙", "created_at": "20130504T07:44:32.000Z"}, {"id": "2381808114", "username": "mommy97giraffe", "location": "Antifa HQs/Mom Division Office", "description": "Follower of Jesus, Mennonite mom&wife, lover of books, world, peo, poetry&art. 
6 autoimmunes&fibro🥄ie Proud Mama Bear of 1gayD & 1pan&autistic son, in 20s🌈💖", "name": "Mennonite Mom(she/her)", "created_at": "20140310T08:51:02.000Z"}, {"id": "2362182011", "username": "rd2glry", "location": "Washington, DC", "description": "", "name": "ateachr", "created_at": "20140224T04:07:21.000Z"}, {"id": "974917494870700032", "username": "GiraffeOld", "location": "Arizona, USA", "description": "", "name": "old man giraffe", "created_at": "20180317T07:56:58.000Z"}, {"id": "830939480", "username": "redz041", "description": "", "name": "Jan Mouzone", "created_at": "20120918T12:18:36.000Z"}, {"id": "3346032292", "username": "kumccaig44", "description": "", "name": "Katrine McCaig", "created_at": "20150625T21:25:21.000Z"}, {"id": "80630279", "username": "LuluTheCalm", "location": "Green Grass & Puddles, Canada", "description": "Mischief in My Eyes & Adventure in My Soul. \nLet's Have a Laugh &, you know, Make the World a Better Place.😎 \nAus/Brit/Cdn🇨🇦", "name": "Lulu 🇨🇦#BeKindBeCalmBeSafe💞 😷 🎏", "created_at": "20091007T17:26:56.000Z"}, {"id": "3252437864", "username": "engelhardterin", "location": "Houston, TX || Lubbock, TX", "description": "24 || Texas Tech || ♀️ || she/her", "name": "Erin Engelhardt", "created_at": "20150622T07:26:28.000Z"}, {"id": "93797267", "username": "mcbeaz", "location": "he/him", "description": "black lives matter.", "name": "mike", "created_at": "20091201T05:28:58.000Z"}, {"id": "2585773107", "username": "michiganington", "location": "Washington, D.C. ", "description": "", "name": "Allyoop", "created_at": "20140606T02:12:33.000Z"}, {"id": "27857135", "username": "JackRayher", "location": "Northport, NY", "description": "Senior Marketing Executive\nLifelong Democrat\n#BidenHarris", "name": "Jack Rayher", "created_at": "20090331T12:12:03.000Z"}, {"id": "1078457644736827392", "username": "RobertCooper58", "description": "Bilingual community advocate. Father of five wonderful kids. 
Lifelong progressive and proud member of @TheDemCoalition. Early supporter of President @JoeBiden.", "name": "Robert Cooper 🌊", "created_at": "20181228T01:08:34.000Z"}, {"id": "206860139", "username": "MariaArtze", "location": "Münster, Deutschland", "description": "Nas trincheiras da ESO\nEmigrante a medio retornar. Womansplainer.\n(Sie  vostede)\n\nTrans rights are human rights.", "name": "A Malvada Profe mediovacinada", "created_at": "20101023T22:27:26.000Z"}, {"id": "2903906123", "username": "lm1067", "location": "London, England", "description": "B A FINE ARTIST GRADUATED", "name": "Luis Pais", "created_at": "20141203T15:53:10.000Z"}, {"id": "64119853", "username": "IAM_SHAKESPEARE", "location": "Tweeting from the Grave", "description": "This bot has tweeted the complete works of Shakespeare (in order) 5 times over the last 12years. On hiatus for a bit. Created by @strebel", "name": "Willy Shakes", "created_at": "20090809T05:41:08.000Z"}, {"id": "3176623941", "username": "acastellich", "location": "Chicago, Il.", "description": "Abogado,Restaurantero,Immigrant , UVM. AD1 IPADE MBA. Restaurant Hospitality Industry, Chicago IL.", "name": "Alejandro Castelli", "created_at": "20150417T13:23:17.000Z"}, {"id": "782765390925533185", "username": "Diane_L_Espo", "location": "Florida, USA", "description": "", "name": "DianeEspo 🇺🇲🗽", "created_at": "20161003T02:13:07.000Z"}, {"id": "67471020", "username": "thedcma", "location": "Fort Lauderdale, FL", "description": "🖤💎 Style is the only substance I abuse.💎🖤 I’m just a 🌈 Gay 🐔Hillbilly 🔮Warlock 🛵 Riding a 👨🏻\u200d🎤Vaporwave Fever Dream #blacklivesmatter", "name": "Grace Kelly on Steiroids", "created_at": "20090821T00:32:37.000Z"}, {"id": "78797635", "username": "graciosodiablo", "description": "Too much of a good thing can be bad.  So too little of a bad thing must be good. 
160 characters or less of me should be perfect.", "name": "gracioso diabloint", "created_at": "20091001T03:59:16.000Z"}, {"id": "268314713", "username": "philppedurand", "location": "Auxerre", "description": "Je suis une personne gentille je milite pour la PMA. je suis militant communiste je suis aussi à l’association des Rosoirs je suis conseillé quartier", "name": "Philippe durand", "created_at": "20110318T14:37:36.000Z"}, {"id": "37996028", "username": "nicrawhide", "location": "Pinconning Michigan ", "description": "Just your average small town gay with big town sensibility!!", "name": "Nicholas Bean", "created_at": "20090505T19:20:37.000Z"}, {"id": "1236656342674407427", "username": "LadyJayPersists", "location": "Valhalla", "description": "USN Veteran | Shieldmaiden | Mom | Not here for a man, I have one | PTSD Warrior | My mind is a beautiful servant to a dangerous master", "name": "Jax", "created_at": "20200308T14:13:48.000Z"}, {"id": "171183306", "username": "dawndawnB", "location": "United States", "description": "Mrs. B, mother of 2 amazing kids, Substance Abuse Counselor, Volunteer, Music Lover. Born in DC but a VA Lo❤️er!", "name": "nwad", "created_at": "20100726T19:21:24.000Z"}, {"id": "817247846751555587", "username": "me2020_2021", "location": "Brisbane, Queensland", "description": "Proud Aussie, living a wonderful life with my wife, Australian Cricket 🏏👏,😷 🍺🥃 \U0001f9ae Alex", "name": "👀🏳️\u200d🌈 "A girl has no Name”', 'created_at': '20170106T05:54:05.000Z'}, {'id': '879459933988585472', 'username': 'Davecl3069', 'location': 'San Francisco Bay Area', 'description': 'proud of my views, life long learner,& hopefully, that guy!\n#LowerTheFlagForCovidVictims #VoteBlue #BLM #SupportThePlayers #LGBTQ #WeNeedToDoBetter #ResistStill', 'name': 'David', 'created_at': '20170626T22:02:42.000Z'}`

Code used

    import json
    import re
    from ast import literal_eval


    def convert_to_json(file):
        with open(file, "r", encoding="utf-8") as f:
            x = f.read()
            x = x.replace("-", "")
            # NB: (*SKIP)(*FAIL) are PCRE verbs; this pattern needs the
            # third-party `regex` module, the built-in `re` raises here.
            rx = re.compile(r'"[^"]*"(*SKIP)(*FAIL)|\'')
            decoded = rx.sub('"', x)


    def open_json():
        with open("data.json", "r", encoding="utf-8") as f:
            # literal_eval expects a string, not a file object
            data = literal_eval(f.read())
            data = json.loads(str(data))

What I would like to achieve

1. Reformat the data to conform to JSON (this question), in order to be able to
2. Build a dataframe with the relevant tweet text, user information and metadata (secondary goal) to be used in further analyses.

Thanks in advance for any suggestions! :)


Answered 2021-Jun-07 at 13:57

If the ' characters that are causing the problem are only in the tweets and descriptions, you could try this:

pre_tweet = "'text': '"
post_tweet = "', 'referenced_tweets':"
output = []
errors = []
with open(file, encoding="utf-8") as f:
    for line in f:
        if pre_tweet in line and post_tweet in line:
            first_part, rest = line.split(pre_tweet)
            tweet, last_part = rest.split(post_tweet)
            # swap quotes everywhere except inside the tweet text itself;
            # use new names so the delimiters stay intact for the next line
            head = first_part.replace('\'', '\"') + pre_tweet.replace('\'', '\"')
            tail = post_tweet.replace('\'', '\"') + last_part.replace('\'', '\"')
            output.append(head + tweet + tail)
        else:
            errors.append(line)
If errors is not empty, then either a line contains no tweet (you can change the code a little bit to add those lines to your output), or the key after the tweet is not 'referenced_tweets'. In the second case, try to figure out what the variations are and modify the above code to check multiple post_tweet markers. You can then do the same with the description, using whatever usually appears before and after it as the markers.

The number of possible keys after the tweets/description must be finite, so it may take some time to figure out all the possibilities, but in the end you should succeed.
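The multiple-marker idea above can be sketched as follows; note that the extra key in `post_markers` and the sample line are hypothetical illustrations, not taken from the actual data:

```python
def fix_line(line, pre_marker, post_markers):
    """Return the line with single quotes converted to double quotes
    everywhere except inside the tweet text, or None if no known
    marker pair is found."""
    if pre_marker not in line:
        return None
    first_part, rest = line.split(pre_marker, 1)
    for post_marker in post_markers:
        if post_marker in rest:
            tweet, last_part = rest.split(post_marker, 1)
            return (first_part.replace("'", '"') + pre_marker.replace("'", '"')
                    + tweet
                    + post_marker.replace("'", '"') + last_part.replace("'", '"'))
    return None

# Hypothetical sample line and marker list for illustration.
line = "{'id': 1, 'text': 'it's fine', 'referenced_tweets': []}"
fixed = fix_line(line, "'text': '", ["', 'referenced_tweets':", "', 'lang':"])
print(fixed)  # quotes inside the tweet text are left untouched
```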



                                                                                                      How to make subplots having different range on each axis have the same figure size using matplotlib?
                                                                                                      Asked 2021-Jun-09 at 08:25

I'm plotting a line plot and an image side by side. Here is my code and the current output:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# creating some random data
df = pd.DataFrame(
    {'px_last': 100 + np.random.randn(1000).cumsum()},
    index=pd.date_range('2010-01-01', periods=1000, freq='B'),
)

fig, ax = plt.subplots(1, 2)
ax[0].plot(df.index, df['px_last'])

# reading the image
url = ''
im = plt.imread(url)
implot = ax[1].imshow(im)

The image is scaled down and the line plot is enlarged on the y-axis. I want both figures to have the same vertical (y-axis) size; that is, I want the vertical height of the line plot reduced to match the height of the image.

I tried changing the figure size of the subplots and editing the gridspec parameter as mentioned here, but that only changes the width of the image, not the height. Any help is appreciated!


                                                                                                      Answered 2021-Jun-09 at 08:17

                                                                                                      One simple solution is to use automatic aspect on the image.

                                                                                                      implot = ax[1].imshow(im, aspect="auto")

This will make the shape of your image the same as that of your other plot.



                                                                                                      How to draw a circle in a square in python?
                                                                                                      Asked 2021-Jun-09 at 04:36

                                                                                                      I have the following Canvas class for drawing color in a given pixel:

import numpy as np
import matplotlib.pyplot as plt

class Canvas:
    def __init__(self, color, pixel):
        self.color = color
        self.pixel = pixel
        self.img = np.zeros((pixel, pixel, 3))  # image buffer; attribute name reconstructed
        self.img[:, :] = color

    def show(self):
        plt.imshow(self.img)

                                                                                                      This simple class draws a square with color, for example:

                                                                                                      purple = np.array([0.5, 0.0, 0.5])
                                                                                                      C = Canvas(purple, 2001) # 2001 x 2001 pixels


I want to create an add_disk() function with three arguments (centroid, radius, color), so that

                                                                                                      C.add_disk((1001, 1001), 500, white)


                                                                                                      However, I am not sure how to do it using the math formula:

I think I can use np.meshgrid to color the white points, but how do I find those points and display them on top of the square?


                                                                                                      Answered 2021-Jun-09 at 04:36

So with meshgrid, using the shape of the image, we first find the X and Y coordinates of every pixel in the 2D image. Then, we select all pixels whose coordinates satisfy the circle rule ((X - Ox) ** 2 + (Y - Oy) ** 2 <= R ** 2).
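As a tiny self-contained illustration of the circle rule on a 5x5 grid:

```python
import numpy as np

# coordinates of every cell in a 5x5 grid
x, y = np.meshgrid(np.arange(5), np.arange(5))

# boolean mask of cells inside a circle of radius 2 centred at (2, 2)
mask = (x - 2) ** 2 + (y - 2) ** 2 <= 2 ** 2

print(mask.sum())  # 13 cells fall inside the circle
```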

import matplotlib.pyplot as plt
import numpy as np

class Canvas:
    def __init__(self, color, pixel):
        self.color = color
        self.pixel = pixel
        self.img = np.zeros((pixel, pixel, 3))  # image buffer; attribute name reconstructed
        self.img[:, :] = color

    def show(self):
        plt.imshow(self.img)

    def add_disk(self, centroid, radius, color):
        # coordinates of every pixel in the image
        x, y = np.meshgrid(np.arange(self.img.shape[0]), np.arange(self.img.shape[1]))
        # boolean mask of the pixels inside the circle
        circle_pixels = (x - centroid[0]) ** 2 + (y - centroid[1]) ** 2 <= radius ** 2
        self.img[circle_pixels, ...] = color

purple = np.array([0.5, 0.0, 0.5])
C = Canvas(purple, 2001)  # 2001 x 2001 pixels
white = np.array([255, 255, 255])
C.add_disk((1001, 1001), 500, white)



                                                                                                      How to input user images to predict with Tensorflow?
                                                                                                      Asked 2021-Jun-03 at 03:15

                                                                                                      For my project, I am using tensorflow to predict handwritten user input.

                                                                                                      Basically I used this dataset:, and created a model. I used matplotlib to see the images that were being produced by the pixels.

My code essentially works with training data, but I want to take it a step further. Using CV2, I created a GUI that allows users to draw a Nepali letter. After this, I have branching that tells the program to save the image on the computer.

                                                                                                      This is a snippet of my code for it:

# creating a loop to show the image
while True:
    cv2.imshow('window', win)  # showing the window
    k = cv2.waitKey(1)
    if k == ord('c'):
        win = np.zeros((500, 500, 3), dtype='float64')  # clearing to a new image
    # saving the image as a file in order to resize it
    if k == ord('s'):
        cv2.imwrite("nepali_character.jpg", win)
        img = cv2.imread("nepali_character.jpg")
        cv2.imshow('char', img)
        # trying to resize the image using Pillow
        # (make the user draw until they press something that STOPS it)
        imgout = cv2.imread('resized.jpg')
        cv2.imshow("out", imgout)
        # finding the pixels of the image, printed as a matrix
        pix = cv2.imread('resized.jpg', 1)
    if k == ord('q'):  # quit and close the window
        break

                                                                                                      I resize the image, because those were the dimensions of the data from the dataset.

Now my question is: HOW do I predict what that letter is using TensorFlow?

When I asked my teacher about it, he said to put it in my data file, treat it as training data, and then look at the weights and pick the greatest one.

But I'm confused: how can I put this image into that data file?

If anyone has suggestions on how to take user input and then predict from it, that would be greatly appreciated.


                                                                                                      Answered 2021-Jun-03 at 03:15

                                                                                                      Understand the dataset:

                                                                                                      1. the size of the image is 32 x 32
                                                                                                      2. there are 46 different characters/alphabets
                                                                                                      ['character_10_yna', 'character_11_taamatar', 'character_12_thaa', 'character_13_daa', 'character_14_dhaa', 'character_15_adna', 'character_16_tabala', 'character_17_tha', 'character_18_da', 'character_19_dha', 'character_1_ka', 'character_20_na', 'character_21_pa', 
                                                                                                      'character_22_pha', 'character_23_ba', 'character_24_bha', 'character_25_ma',
                                                                                                       'character_26_yaw', 'character_27_ra', 'character_28_la', 'character_29_waw', 'character_2_kha', 'character_30_motosaw', 'character_31_petchiryakha', 'character_32_patalosaw', 'character_33_ha', 'character_34_chhya', 
                                                                                                      'character_35_tra', 'character_36_gya', 'character_3_ga', 'character_4_gha', 'character_5_kna', 'character_6_cha', 'character_7_chha', 'character_8_ja', 
                                                                                                      'character_9_jha', 'digit_0', 'digit_1', 'digit_2', 'digit_3', 'digit_4', 'digit_5', 'digit_6', 'digit_7', 'digit_8', 'digit_9']

As your images are categorized into folders, the Keras implementation will be:

import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
import pathlib

dataDir = "/xx/xx/xx/xx/datasets/Devanagari/drive-download-20210601T224146Z-001/Train"
data_dir = keras.utils.get_file(dataDir, 'file://'+dataDir)
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.png')))

batch_size = 32
img_height = 180  # scale it up for better performance
img_width = 180  # scale it up for better performance

# validation_split/subset/seed values follow the TF image-classification tutorial
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
  data_dir,
  validation_split=0.2,
  subset="training",
  seed=123,
  image_size=(img_height, img_width),
  batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
  data_dir,
  validation_split=0.2,
  subset="validation",
  seed=123,
  image_size=(img_height, img_width),
  batch_size=batch_size)

class_names = train_ds.class_names
print(class_names)  # 46 classes

For caching and normalization, refer to the TensorFlow tutorial:

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
normalized_ds = x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
print(np.min(first_image), np.max(first_image))

Model setup, compilation and training:

num_classes = 46
model = Sequential([
  layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
  layers.Conv2D(16, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(),
  layers.Conv2D(32, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(),
  layers.Conv2D(64, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(),
  layers.Flatten(),
  layers.Dense(128, activation='relu'),
  layers.Dense(num_classes)])
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history =, validation_data=val_ds, epochs=10)

This will result in the following (very promising!):

                                                                                                      Epoch 10/10
                                                                                                      1955/1955 [==============================] - 924s 472ms/step - loss: 0.0201 - accuracy: 0.9932 - val_loss: 0.2267 - val_accuracy: 0.9504

Save the model (training takes a while, so it is better to save it):

!mkdir -p saved_model'saved_model/my_model')

Load the model:

loaded_model = tf.keras.models.load_model('saved_model/my_model')
# Check its architecture
loaded_model.summary()

Now the final task: get the prediction. One way is as follows:

import cv2

im2 = cv2.imread("nepali_character.jpg")  # the image saved from the drawing GUI
im2 = cv2.resize(im2, (180, 180))  # resize to 180x180, as that is what the model was trained on
img2 = tf.expand_dims(im2, 0)  # expand the dims, i.e. change shape from (180, 180, 3) to (1, 180, 180, 3)
predictions = loaded_model.predict(img2)
score = tf.nn.softmax(predictions[0])  # get softmax over the outputs
    "This image most likely belongs to {} with a {:.2f} percent confidence."
    .format(class_names[np.argmax(score)], 100 * np.max(score))
)  # np.argmax gives the index where the probability is highest, in this case 29.
# This matches your instructor's advice: pick the "greatest weight".
                                                                                                      (180, 180, 3)
                                                                                                      (1, 180, 180, 3)
                                                                                                      This image most likely belongs to character_3_ga with a 100.00 percent confidence.

Another way is online, the one you are trying to achieve: the image shape needs to be (1, 180, 180, 3) for this example, or (1, 32, 32, 3) if no resize was done, and then feed it to predict. Something like below:

out = tf.expand_dims(out, 0)
predictions = loaded_model.predict(out)
score = tf.nn.softmax(predictions[0])  # get softmax over the outputs
    "This image most likely belongs to {} with a {:.2f} percent confidence."
    .format(class_names[np.argmax(score)], 100 * np.max(score))
)



                                                                                                      DQN Pytorch Loss keeps increasing
                                                                                                      Asked 2021-Jun-02 at 17:39

I am implementing a simple DQN algorithm using PyTorch to solve the CartPole environment from gym. I have been debugging for a while now, and I can't figure out why the model is not learning.


• using SmoothL1Loss performs worse than MSELoss, but loss increases for both
• a smaller LR in Adam does not work; I have tested 0.0001, 0.00025, 0.0005 and the default


• I have debugged various parts of the algorithm individually, and can say with good confidence that the issue is in the learn function. I am wondering if this bug is due to me misunderstanding detach in PyTorch, or some other framework mistake I'm making.
• I am trying to stick as close to the original paper as possible (linked above)
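Since the suspicion centers on detach, here is a minimal sketch of how the TD target in DQN is normally computed with the bootstrap term detached from the graph. The network and tensor shapes below are toy stand-ins, not the question's actual learn function:

```python
import torch
import torch.nn as nn

gamma = 0.99
net = nn.Linear(4, 2)  # toy Q-network: 4 state dims, 2 actions

# a hypothetical mini-batch of transitions
states = torch.randn(8, 4)
actions = torch.randint(0, 2, (8,))
rewards = torch.randn(8)
states_ = torch.randn(8, 4)
terminals = torch.zeros(8)

# Q(s, a) for the actions actually taken
q_pred = net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

# The bootstrap term must NOT backpropagate into the network,
# hence .detach() on the next-state values.
q_next = net(states_).max(dim=1).values.detach()
target = rewards + gamma * q_next * (1 - terminals)

loss = nn.functional.mse_loss(q_pred, target)
loss.backward()  # gradients flow only through q_pred
```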


                                                                                                      import torch as T
                                                                                                      import torch.nn as nn
                                                                                                      import torch.nn.functional as F
                                                                                                      import gym
                                                                                                      import numpy as np
                                                                                                      class ReplayBuffer:
                                                                                                          def __init__(self, mem_size, input_shape, output_shape):
                                                                                                              self.mem_counter = 0
                                                                                                              self.mem_size = mem_size
                                                                                                              self.input_shape = input_shape
                                                                                                              self.actions = np.zeros(mem_size)
                                                                                                              self.states = np.zeros((mem_size, *input_shape))
                                                                                                              self.states_ = np.zeros((mem_size, *input_shape))
                                                                                                              self.rewards = np.zeros(mem_size)
                                                                                                              self.terminals = np.zeros(mem_size)
                                                                                                          def sample(self, batch_size):
                                                                                                              indices = np.random.choice(self.mem_size, batch_size)
        return self.actions[indices], self.states[indices], \
            self.states_[indices], self.rewards[indices], \
            self.terminals[indices]
                                                                                                          def store(self, action, state, state_, reward, terminal):
                                                                                                              index = self.mem_counter % self.mem_size
                                                                                                              self.actions[index] = action
                                                                                                              self.states[index] = state
                                                                                                              self.states_[index] = state_
                                                                                                              self.rewards[index] = reward
                                                                                                              self.terminals[index] = terminal
                                                                                                              self.mem_counter += 1
                                                                                                      class DeepQN(nn.Module):
                                                                                                          def __init__(self, input_shape, output_shape, hidden_layer_dims):
                                                                                                              super(DeepQN, self).__init__()
                                                                                                              self.input_shape = input_shape
                                                                                                              self.output_shape = output_shape
                                                                                                              layers = []
                                                                                                              layers.append(nn.Linear(*input_shape, hidden_layer_dims[0]))
                                                                                                              for index, dim in enumerate(hidden_layer_dims[1:]):
                                                                                                                  layers.append(nn.Linear(hidden_layer_dims[index], dim))
                                                                                                              layers.append(nn.Linear(hidden_layer_dims[-1], *output_shape))
                                                                                                              self.layers = nn.ModuleList(layers)
                                                                                                              self.loss = nn.MSELoss()
                                                                                                              self.optimizer = T.optim.Adam(self.parameters())
                                                                                                          def forward(self, states):
                                                                                                              for layer in self.layers[:-1]:
                                                                                                                  states = F.relu(layer(states))
                                                                                                              return self.layers[-1](states)
                                                                                                          def learn(self, predictions, targets):
                                                                                                              loss = self.loss(input=predictions, target=targets)
                                                                                                              return loss
                                                                                                      class Agent:
                                                                                                          def __init__(self, epsilon, gamma, input_shape, output_shape):
                                                                                                              self.input_shape = input_shape
                                                                                                              self.output_shape = output_shape
                                                                                                              self.epsilon = epsilon
                                                                                                              self.gamma = gamma
                                                                                                              self.q_eval = DeepQN(input_shape, output_shape, [64])
                                                                                                              self.memory = ReplayBuffer(10000, input_shape, output_shape)
                                                                                                              self.batch_size = 32
                                                                                                              self.learn_step = 0
                                                                                                          def move(self, state):
                                                                                                              if np.random.random() < self.epsilon:
                                                                                                                  return np.random.choice(*self.output_shape)
                                                                                                                  state = T.tensor([state]).float()
                                                                                                                  action = self.q_eval(state).max(axis=1)[1]
                                                                                                                  return action.item()
                                                                                                          def sample(self):
                                                                                                              actions, states, states_, rewards, terminals = \
                                                                                                              actions = T.tensor(actions).long()
                                                                                                              states = T.tensor(states).float()
                                                                                                              states_ = T.tensor(states_).float()
                                                                                                              rewards = T.tensor(rewards).view(self.batch_size).float()
                                                                                                              terminals = T.tensor(terminals).view(self.batch_size).long()
                                                                                                              return actions, states, states_, rewards, terminals
                                                                                                          def learn(self, state, action, state_, reward, done):
                                                                                                    , state, state_, reward, done)
                                                                                                              if self.memory.mem_counter < self.batch_size:
                                                                                                              self.learn_step += 1
                                                                                                              actions, states, states_, rewards, terminals = self.sample()
                                                                                                              indices = np.arange(self.batch_size)
                                                                                                              q_eval = self.q_eval(states)[indices, actions]
                                                                                                              q_next = self.q_eval(states_).detach()
                                                                                                              q_target = rewards + self.gamma * q_next.max(axis=1)[0] * (1 - terminals)
                                                                                                              loss = self.q_eval.learn(q_eval, q_target)
                                                                                                              self.epsilon *= 0.9 if self.epsilon > 0.1 else 1.0
                                                                                                              return loss.item()
                                                                                                      def learn(env, agent, episodes=500):
                                                                                                          print('Episode: Mean Reward: Last Loss: Mean Step')
                                                                                                          rewards = []
                                                                                                          losses = [0]
                                                                                                          steps = []
                                                                                                          num_episodes = episodes
                                                                                                          for episode in range(num_episodes):
                                                                                                              done = False
                                                                                                              state = env.reset()
                                                                                                              total_reward = 0
                                                                                                              n_steps = 0
                                                                                                              while not done:
                                                                                                                  action = agent.move(state)
                                                                                                                  state_, reward, done, _ = env.step(action)
                                                                                                                  loss = agent.learn(state, action, state_, reward, done)
                                                                                                                  state = state_
                                                                                                                  total_reward += reward
                                                                                                                  n_steps += 1
                                                                                                                  if loss:
                                                                                                              if episode % (episodes // 10) == 0 and episode != 0:
                                                                                                                  print(f'{episode:5d} : {np.mean(rewards):5.2f} '
                                                                                                                        f': {np.mean(losses):5.2f}: {np.mean(steps):5.2f}')
                                                                                                                  rewards = []
                                                                                                                  losses = [0]
                                                                                                                  steps = []
                                                                                                          print(f'{episode:5d} : {np.mean(rewards):5.2f} '
                                                                                                                f': {np.mean(losses):5.2f}: {np.mean(steps):5.2f}')
                                                                                                          return losses, rewards
                                                                                                      if __name__ == '__main__':
                                                                                                          env = gym.make('CartPole-v1')
                                                                                                          agent = Agent(1.0, 1.0,
                                                                                                          learn(env, agent, 500)


Answered 2021-Jun-02 at 17:39

The main problem, I think, is the discount factor, gamma. You are setting it to 1.0, which means you are giving future rewards the same weight as the current one. Usually in reinforcement learning we care more about the immediate reward than the future, so gamma should always be less than 1.
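To make the effect of gamma concrete, here is a small illustrative calculation (not from the original answer) comparing the discounted return of five consecutive rewards of 1 under gamma = 1.0 and gamma = 0.99:

```python
# Discounted return G = sum over t of gamma^t * r_t for a toy trajectory.
def discounted_return(rewards, gamma):
    return sum(gamma ** t * r for t, r in enumerate(rewards))

rewards = [1.0] * 5
print(discounted_return(rewards, 1.0))   # 5.0: every reward weighted equally
print(discounted_return(rewards, 0.99))  # ~4.901: later rewards count slightly less
```

With gamma < 1, rewards far in the future contribute less to the target, which keeps the magnitude of the Q targets bounded even for long episodes.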

Just to give it a try, I set gamma = 0.99 and ran your code:

Episode: Mean Reward: Last Loss: Mean Step
  100 : 34.80 :  0.34: 34.80
  200 : 40.42 :  0.63: 40.42
  300 : 65.58 :  1.78: 65.58
  400 : 212.06 :  9.84: 212.06
  500 : 407.79 : 19.49: 407.79

As you can see, the loss still increases (though not as much as before), but so does the reward. Keep in mind that the loss here is not a good metric for performance, because you have a moving target. You can reduce the instability of the target by using a target network. With additional parameter tuning and a target network, one could probably make the loss even more stable.

Also, note that in reinforcement learning the loss value is generally not as important as it is in supervised learning: a decrease in loss does not always imply an improvement in performance, and vice versa.

The problem is that the Q target is moving while the training steps happen; as the agent plays, predicting the correct sum of rewards gets extremely hard (e.g. exploring more states and rewards means higher reward variance), so the loss increases. This is even clearer in more complex environments (more states, more varied rewards, etc.).

At the same time, the Q network is getting better at approximating the Q values for each action, so the rewards may increase.
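As a rough sketch of the target-network idea mentioned above (the names q_eval and q_target and the sync interval are assumptions, not part of the original code): keep a frozen copy of the online network, compute Q targets with it, and sync it every N learn steps.

```python
import copy
import torch
import torch.nn as nn

# Stand-in for the online Q network; in the question's code this would
# be the DeepQN instance held by the Agent.
q_eval = nn.Linear(4, 2)

# Frozen copy used only to compute Q targets.
q_target = copy.deepcopy(q_eval)

SYNC_EVERY = 100  # assumed sync interval; a tunable hyperparameter
learn_step = 0

def after_learn_step():
    # Call once per learn step; periodically copy the online weights
    # into the target network so the Q target moves less often.
    global learn_step
    learn_step += 1
    if learn_step % SYNC_EVERY == 0:
        q_target.load_state_dict(q_eval.state_dict())
```

In the question's Agent.learn, q_target (rather than self.q_eval) would then be used to compute q_next.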



node-pre-gyp: not found --fallback to build error while installing bcrypt
Asked 2021-May-30 at 11:44

I am building a new Node project. While installing the bcrypt package I got the error given below:

> bcrypt@5.0.1 install /media/keval/E: Drive/projects/MERN Projects/FMS/node_modules/bcrypt
> node-pre-gyp install --fallback-to-build
sh: 1: node-pre-gyp: not found
npm WARN ims@1.0.0 No repository field.
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! bcrypt@5.0.1 install: `node-pre-gyp install --fallback-to-build`
npm ERR! spawn ENOENT
npm ERR! 
npm ERR! Failed at the bcrypt@5.0.1 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/keval/.npm/_logs/2021-05-29T09_24_30_366Z-debug.log

All the other packages install properly without errors; I only get this error when installing bcrypt.

I have node-pre-gyp installed. I found one suggested solution for this problem:

npm uninstall bcrypt --save
npm install bcrypt@5 --save

but it is not working at all. What am I doing wrong?


Answered 2021-May-30 at 11:44

Yes, I also faced this problem, but don't worry: you can install bcryptjs instead of bcrypt; it works the same way. First run npm uninstall bcrypt, then npm install bcryptjs. It will work, but make sure you change the import accordingly: import bcrypt from 'bcryptjs';



Matplotlib plot without whitespace in tkinter frame
Asked 2021-May-28 at 18:20

I'm trying to display a greyscale image (made with matplotlib) in a tkinter Frame. The image itself is displayed, but with some whitespace between each side of the actual image and the tkinter Frame border. Any ideas on how I can avoid drawing the image with the whitespace?

The code I currently use:

from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

fig = Figure(figsize=(20, 20))
ax = fig.add_subplot(111)
ax.imshow(, cmap="Greys", interpolation=None)
canvas = FigureCanvasTkAgg(fig, master=self.frame)

The code produces this output in the frame:

This is roughly how it should look (with better image quality of course, but you get the idea):


Answered 2021-May-28 at 17:55

It looks like the whitespace exists both in the plot and in the GUI, so the problem lies on the matplotlib side. Calling fig.subplots_adjust(left=0.01, right=0.99, top=0.99, bottom=0.01) can reduce this whitespace.
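If you want to remove the whitespace entirely rather than just reduce it, one possible sketch (not the asker's exact code; the Agg backend and random placeholder data are only so the snippet runs standalone) is to turn the axis decorations off and let the axes fill the whole figure:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # standalone sketch only; the tkinter embedding uses FigureCanvasTkAgg
from matplotlib.figure import Figure

fig = Figure(figsize=(5, 5))
ax = fig.add_subplot(111)
ax.imshow(np.random.rand(10, 10), cmap="Greys")  # placeholder image data
ax.set_axis_off()                                # drop ticks, labels, and frame
fig.subplots_adjust(left=0, right=1, top=1, bottom=0)  # axes fill the figure
```

In the question's setup, the same fig would then be passed to FigureCanvasTkAgg(fig, master=self.frame) as before.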



bounding boxes on handwritten digits with opencv
Asked 2021-May-25 at 23:48

I tried the code provided below to segment each digit in this image, put a contour around it, and then crop it out, but it's giving me bad results; I'm not sure what I need to change or work on.

The best idea I can think of right now is filtering the 4 largest contours in the image, excluding the image contour itself.

The code I'm working with:

import sys
import numpy as np
import cv2

im = cv2.imread('marks/mark28.png')
im3 = im.copy()

gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.adaptiveThreshold(blur, 255, 1, 1, 11, 2)

#################      Now finding Contours         ###################
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

samples = np.empty((0, 100))
responses = []
keys = [i for i in range(48, 58)]

for cnt in contours:
    if cv2.contourArea(cnt) > 50:
        [x, y, w, h] = cv2.boundingRect(cnt)
        if h > 28:
            cv2.rectangle(im, (x, y), (x + w, y + h), (0, 0, 255), 2)
            roi = thresh[y:y + h, x:x + w]
            roismall = cv2.resize(roi, (10, 10))
            cv2.imshow('norm', im)
            key = cv2.waitKey(0)
            if key == 27:  # (escape to quit)
                sys.exit()
            elif key in keys:
                responses.append(int(chr(key)))
                sample = roismall.reshape((1, 100))
                samples = np.append(samples, sample, 0)

responses = np.array(responses, np.float32)
responses = responses.reshape((responses.size, 1))
print("training complete")

np.savetxt('', samples)
np.savetxt('', responses)

I probably need to change the if condition on height, but more importantly I need conditions to get the 4 largest contours in the image. Sadly, I haven't managed to figure out what I'm supposed to be filtering for.

This is the kind of result I get; I'm trying to avoid getting those inner contours on the digit "zero".

Unprocessed images as requested: example 1 example 2

All I need is an idea of what I should filter for; please don't write code. Thank you, community.


Answered 2021-May-25 at 23:48

You almost have it. You get multiple bounding rectangles on each digit because you are retrieving every contour (external and internal). You are using cv2.findContours in RETR_LIST mode, which retrieves all the contours but doesn't create any parent-child relationship. The parent-child relationship is what discriminates between inner (child) and outer (parent) contours; OpenCV calls this "Contour Hierarchy". Check out the docs for an overview of all hierarchy modes. Of particular interest is RETR_EXTERNAL mode, which fetches only external contours, so you don't get multiple contours and (by extension) multiple bounding boxes for each digit.

Also, it seems that your images have a red border. This will introduce noise while thresholding the image, and the border might be recognized as the top-level outer contour; every other contour (the children of this parent contour) would then not be fetched in RETR_EXTERNAL mode. Fortunately, the border position seems constant, and we can eliminate it with a simple flood-fill, which pretty much fills a blob of a target color with a substitute color.

Let's check out the reworked code:

# Imports:
import cv2
import numpy as np

# Set image path
path = "D://opencvImages//"
fileName = "rhWM3.png"

# Read Input image
inputImage = cv2.imread(path + fileName)

# Deep copy for results:
inputImageCopy = inputImage.copy()

# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)

# Threshold via Otsu:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

                                                                                                      The first step is to get the binary image with all the target blobs/contours. This is the result so far:

Notice the border is white. We have to delete it; a simple flood-fill at position (x=0, y=0) with black color will suffice:

                                                                                                      # Flood-fill border, seed at (0,0) and use black (0) color:
                                                                                                      cv2.floodFill(binaryImage, None, (0, 0), 0)

                                                                                                      This is the filled image, no more border!

                                                                                                      Now we can retrieve the external, outermost contours in RETR_EXTERNAL mode:

                                                                                                      # Get each bounding box
                                                                                                      # Find the big contours/blobs on the filtered image:
                                                                                                      contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

Notice you also get each contour's hierarchy as the second return value. This is useful if you want to check whether the current contour is a parent or a child. Alright, let's loop through the contours and get their bounding boxes. If you want to ignore contours below a minimum area threshold, you can also implement an area filter:

# Look for the outer bounding boxes (no children):
for c in contours:
    # Get the bounding rectangle of the current contour:
    rectX, rectY, rectWidth, rectHeight = cv2.boundingRect(c)
    # Estimate the bounding rect area:
    rectArea = rectWidth * rectHeight
    # Set a min area threshold:
    minArea = 10
    # Filter blobs by area:
    if rectArea > minArea:
        # Draw bounding box:
        color = (0, 255, 0)
        cv2.rectangle(inputImageCopy, (int(rectX), int(rectY)),
                      (int(rectX + rectWidth), int(rectY + rectHeight)), color, 2)
        cv2.imshow("Bounding Boxes", inputImageCopy)
        # Crop bounding box:
        currentCrop = inputImage[rectY:rectY + rectHeight, rectX:rectX + rectWidth]
        cv2.imshow("Current Crop", currentCrop)
        cv2.waitKey(0)

                                                                                                      The last three lines of the above snippet crop and show the current digit. This is the result of detected bounding boxes for both of your images (the bounding boxes are colored in green, the red border is part of the input images):
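As a side note on the hierarchy value mentioned above: each row of the array returned by `cv2.findContours` has the form `[next, previous, first_child, parent]`, and a top-level (outermost) contour is one whose parent index is -1. A minimal sketch with a hand-made, hypothetical hierarchy array (not taken from your images):

```python
import numpy as np

# Hypothetical hierarchy in the (1, N, 4) shape cv2.findContours returns;
# each row is [next, previous, first_child, parent]:
hierarchy = np.array([[[ 1, -1, -1, -1],    # contour 0: top-level
                       [-1,  0,  2, -1],    # contour 1: top-level, has a child
                       [-1, -1, -1,  1]]])  # contour 2: child of contour 1

# Keep only the indices of the outermost contours (parent == -1):
top_level = [i for i, h in enumerate(hierarchy[0]) if h[3] == -1]
print(top_level)  # → [0, 1]
```

In `RETR_EXTERNAL` mode every returned contour is already top-level; this filter matters in `RETR_TREE` or `RETR_CCOMP` mode.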



                                                                                                      dtype object cannot be converted to float matplotlib
                                                                                                      Asked 2021-May-24 at 18:30

                                                                                                      I have been trying to run the following code

                                                                                                      import cv2
                                                                                                      import numpy as np
                                                                                                      from matplotlib import pyplot as plt
                                                                                                      img = cv2.imread("G:\myfiles\frames\frame1.jpg",0)
                                                                                                      image = [img]
                                                                                                      for i in range(1):
                                                                                                          plt.subplot(1, 1, i+1), plt.imshow(image[i], 'gray')

                                                                                                      I am getting the following error

                                                                                                          TypeError                                 Traceback (most recent call last)
                                                                                                            1 image = [img]
                                                                                                            2 for i in range(1):
                                                                                                      ----> 3     plt.subplot(1, 1, i+1), plt.imshow(image[i], 'gray')
                                                                                                            4     plt.xticks([]),plt.yticks([])
                                                                                                      E:\anaconda\programme files\lib\site-packages\matplotlib\ in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, filternorm, filterrad, resample, url, data, **kwargs)
                                                                                                         2728         filternorm=filternorm, filterrad=filterrad, resample=resample,
                                                                                                         2729         url=url, **({"data": data} if data is not None else {}),
                                                                                                      -> 2730         **kwargs)
                                                                                                         2731     sci(__ret)
                                                                                                         2732     return __ret
                                                                                                      E:\anaconda\programme files\lib\site-packages\matplotlib\ in inner(ax, data, *args, **kwargs)
                                                                                                         1445     def inner(ax, *args, data=None, **kwargs):
                                                                                                         1446         if data is None:
                                                                                                      -> 1447             return func(ax, *map(sanitize_sequence, args), **kwargs)
                                                                                                         1449         bound = new_sig.bind(ax, *args, **kwargs)
                                                                                                      E:\anaconda\programme files\lib\site-packages\matplotlib\axes\ in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, filternorm, filterrad, resample, url, **kwargs)
                                                                                                         5521                               resample=resample, **kwargs)
                                                                                                      -> 5523         im.set_data(X)
                                                                                                         5524         im.set_alpha(alpha)
                                                                                                         5525         if im.get_clip_path() is None:
                                                                                                      E:\anaconda\programme files\lib\site-packages\matplotlib\ in set_data(self, A)
                                                                                                          701                 not np.can_cast(self._A.dtype, float, "same_kind")):
                                                                                                          702             raise TypeError("Image data of dtype {} cannot be converted to "
                                                                                                      --> 703                             "float".format(self._A.dtype))
                                                                                                          705         if self._A.ndim == 3 and self._A.shape[-1] == 1:
                                                                                                      TypeError: Image data of dtype object cannot be converted to float

This is the specific error; it seems matplotlib is what raises it:

                                                                                                      TypeError: Image data of dtype object cannot be converted to float

Why does this error arise, and what is a possible fix for it?


                                                                                                      Answered 2021-May-24 at 18:30

You'll need to escape the backslashes in your path, because a backslash starts an escape sequence in Python string literals. Here, the `\f` in `\frames` and `\frame1.jpg` is interpreted as a form-feed character, so the path is wrong, `cv2.imread` returns `None`, and matplotlib then fails with the "dtype object" error when asked to display it.

                                                                                                      You can try:

                                                                                                      img = cv2.imread("G:\\myfiles\\frames\\frame1.jpg", 0)

                                                                                                      Or you can use an r-string:

                                                                                                      img = cv2.imread(r"G:\myfiles\frames\frame1.jpg", 0)

                                                                                                      Lastly, you can try using forward slashes instead of backslashes:

                                                                                                      img = cv2.imread("G:/myfiles/frames/frame1.jpg", 0)
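To see concretely why the original path fails, compare the three literal forms (a minimal sketch; the drive path itself is just illustrative):

```python
# A backslash starts an escape sequence in a normal string literal,
# so \f becomes a form-feed control character and corrupts the path:
broken = "G:\myfiles\frames\frame1.jpg"
print("\f" in broken)  # → True

# Raw strings and escaped backslashes both keep the path intact:
raw = r"G:\myfiles\frames\frame1.jpg"
escaped = "G:\\myfiles\\frames\\frame1.jpg"
print(raw == escaped)  # → True
print(raw == broken)   # → False
```

A separate defensive habit worth adopting: check the result of `cv2.imread` for `None` before plotting, since it does not raise on a missing file.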



                                                                                                      How to compress folder into an archive file by command line without knowing the full name of the folder?
                                                                                                      Asked 2021-May-24 at 17:22

                                                                                                      I have to compress some folders every month that always start with the number of the referenced month followed by a -.

                                                                                                      For example:

                                                                                                      April: folder is 04- ??????
                                                                                                      May: folder is 05- ???????

                                                                                                      I just know the first part of the folder name. The rest of the folder name is always different.

I'm stuck here:

                                                                                                      @echo off
                                                                                                      for /f "delims=" %%G In ('PowerShell -Command "&{((Get-Date).AddMonths(-1)).ToString('yyyy')}"') do set "ano=%%G" 
                                                                                                      for /f "delims=" %%A In ('PowerShell -Command "&{((Get-Date).AddMonths(-1)).ToString('MM-')}"') do set "mes=%%A" 
                                                                                                      set "winrar=C:\Program Files\winrar"
                                                                                                      "%winrar%\rar.exe" a -ibck -ep1  "C:\FOLDER 1\FOLDER 2\FOLDER 3\%ano%\????????.rar"

                                                                                                      I just have the information about the first name part of the folder like 04-.

                                                                                                      How can I specify Rar.exe to compress the folder by only the first folder name?


                                                                                                      Answered 2021-May-24 at 17:22

I recommend reading the answers on "Time is set incorrectly after midnight" to understand the first FOR command line of the batch code below, which gets the current year and month without using PowerShell:

@echo off
setlocal EnableExtensions DisableDelayedExpansion
pushd "%~dp0"
for /F "tokens=1,2 delims=/" %%I in ('%SystemRoot%\System32\robocopy.exe "%SystemDrive%\|" . /NJH') do set "Year=%%I" & set "Month=%%J" & goto CheckFolder

:CheckFolder
for /D %%I in (%Month%-*) do goto CompressFolders
echo INFO: There is no non-hidden folder with name: %Month%-*
goto EndBatch

:CompressFolders
set "ArchiveFolder=C:\FOLDER 1\FOLDER 2\FOLDER 3\%Year%"
md "%ArchiveFolder%" 2>nul
if not exist "%ArchiveFolder%\" echo ERROR: Failed to create folder: "%ArchiveFolder%"& goto EndBatch
"C:\Program Files\WinRAR\Rar.exe" a -cfg- -ep1 -idq -m5 -r -y "%ArchiveFolder%\%Month%.rar" "%Month%-*\*"

:EndBatch
popd
endlocal

                                                                                                      That batch file compresses all folders in directory of the batch file with name starting with current month and a hyphen into a RAR archive file with current month as archive file name. So if the batch file directory contains, for example, the folders 05-Folder and 05-OtherFolder, the RAR archive file 05.rar contains these two folders with all its files and subfolders.

                                                                                                      It is of course also possible to compress each folder with name starting with current month and a hyphen into a separate RAR archive file by using the following code:

@echo off
setlocal EnableExtensions DisableDelayedExpansion
pushd "%~dp0"
for /F "tokens=1,2 delims=/" %%I in ('%SystemRoot%\System32\robocopy.exe "%SystemDrive%\|" . /NJH') do set "Year=%%I" & set "Month=%%J" & goto CheckFolder

:CheckFolder
for /D %%I in (%Month%-*) do goto CompressFolders
echo INFO: There is no non-hidden folder with name: %Month%-*
goto EndBatch

:CompressFolders
set "ArchiveFolder=C:\FOLDER 1\FOLDER 2\FOLDER 3\%Year%"
md "%ArchiveFolder%" 2>nul
if not exist "%ArchiveFolder%\" echo ERROR: Failed to create folder: "%ArchiveFolder%"& goto EndBatch
for /D %%I in (%Month%-*) do "C:\Program Files\WinRAR\Rar.exe" a -cfg- -ep1 -idq -m5 -r -y "%ArchiveFolder%\%%I.rar" "%%I\"

:EndBatch
popd
endlocal

                                                                                                      That batch file creates the RAR archive files 05-Folder.rar and 05-OtherFolder.rar with the folder names 05-Folder and 05-OtherFolder not included in the appropriate RAR archive file because of the backslash in "%%I\". The folder names 05-Folder and 05-OtherFolder would be included in the archive files on using just "%%I".

                                                                                                      Please double click on file C:\Program Files\WinRAR\Rar.txt to open this text file and read it from top to bottom. It is the manual of console version Rar.exe. The switch -ibck is not in this manual because that is an option of the GUI version WinRAR.exe to run in background which means minimized to system tray. The Rar.exe command line switch -idq is used instead to create the archive files in quiet mode showing only errors.

                                                                                                      For understanding the used commands and how they work, open a command prompt window, execute there the following commands, and read entirely all help pages displayed for each command very carefully.

                                                                                                      • echo /?
                                                                                                      • endlocal /?
                                                                                                      • for /?
                                                                                                      • goto /?
                                                                                                      • if /?
                                                                                                      • md /?
                                                                                                      • popd /?
                                                                                                      • pushd /?
                                                                                                      • robocopy /?
                                                                                                      • set /?
                                                                                                      • setlocal /?

                                                                                                      See also single line with multiple commands using Windows batch file for an explanation of operator & used multiple times in the two batch files above.
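For comparison, the same monthly-archive idea can be sketched in Python with only the standard library. This is a minimal sketch, not the batch solution above: it produces a ZIP rather than a RAR, and the source/destination paths and naming are illustrative assumptions.

```python
import zipfile
from datetime import date
from pathlib import Path

def archive_month_folders(src_dir, dest_dir, month=None):
    """Zip every folder in src_dir whose name starts with 'MM-' into MM.zip."""
    month = month or f"{date.today().month:02d}"
    src, dest = Path(src_dir), Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    folders = [p for p in src.iterdir()
               if p.is_dir() and p.name.startswith(f"{month}-")]
    if not folders:
        return None  # nothing to archive this month
    out = dest / f"{month}.zip"
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for folder in folders:
            for file in folder.rglob("*"):
                if file.is_file():
                    # Store paths relative to src so the folder names are kept,
                    # mirroring the -ep1 behavior of Rar.exe:
                    zf.write(file, file.relative_to(src))
    return out
```

Calling `archive_month_folders("D:/work", "D:/archives")` would then bundle, say, `05-Folder` and `05-OtherFolder` into a single `05.zip`.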


                                                                                                      Community Discussions, Code Snippets contain sources that include Stack Exchange Network



                                                                                                      Install ims

You can use the standard Go utility to get the binary and compile it yourself, or visit the Releases page to download a pre-compiled binary for your arch/OS.


For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on Stack Overflow.

                                                                                                    • CLI

                                                                                                      gh repo clone wyattjoh/ims

