ims | image manipulation service, written in Go | Computer Vision library
kandi X-RAY | ims Summary
- Serve starts the HTTP server.
- New creates a new provider.
- main is the entry point for testing.
- ResizeImage resizes the image to the given width and height.
- Process decodes an image from an input reader.
- ServeAction runs the action handler.
- Image applies an image to the image.
- RotateImage rotates an image.
- getFilename extracts the filename from the provider.
- Middleware returns an http.HandlerFunc for the request.
ims Key Features
ims Examples and Code Snippets
const Crypto = require("crypto");
const querystring = require("querystring");

const transformationOptions = {
  width: 100,
  height: 200,
};

// Change this to the secret that you gave to ims via the `--signing-secret`
// flag.
const secret = "keyboard cat";

// Create the sorted query object.
let value = Object.keys(transformationOptions)
  .sort()
  .reduce((result, key) => {
    result.push(querystring.stringify({ [key]: transformationOptions[key] }));
    return result;
  }, [])
  .join("&");

// If you've enabled --signing-with-path, you need to include the path component
// in your value:
//
// value = "/my-image.jpg?" + value;

const sig = Crypto.createHmac("sha256", secret).update(value).digest("hex");

console.log(value + "&sig=" + sig);
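For reference, here is the same signing scheme as a Python sketch. It mirrors the HMAC-SHA256 construction shown above; it is illustrative and not part of the ims distribution:

import hashlib
import hmac
from urllib.parse import urlencode

transformation_options = {"width": 100, "height": 200}

# Change this to the secret that you gave to ims via the --signing-secret flag.
secret = "keyboard cat"

# Build the sorted query string, one encoded key=value pair per option.
value = "&".join(urlencode({key: transformation_options[key]})
                 for key in sorted(transformation_options))

# If you've enabled --signing-with-path, include the path component:
# value = "/my-image.jpg?" + value

sig = hmac.new(secret.encode(), value.encode(), hashlib.sha256).hexdigest()
print(value + "&sig=" + sig)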
# require all requests to have their query parameters signed with a HS256
# signature
ims --signing-secret "keyboard cat"

# require all requests to have their query parameters and path signed with a
# HS256 signature
ims --signing-secret "keyboard cat" --signing-with-path
NAME:
   ims - Image Manipulation Server

USAGE:
   ims [global options] command [command options] [arguments...]

COMMANDS:
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --listen-addr value     the address to listen for new connections on (default: "127.0.0.1:8080")
   --backend value         comma separated <host>,<origin>, where <origin> is a pathname or a url (with scheme) to load images from, or just <origin>, in which case the host will be the listen address
   --origin-cache value    cache the origin resources based on their cache headers (:memory: for memory based cache, directory name for file based, not specified for disabled)
   --signing-secret value  when provided, will be used to verify signed image requests made to the domain
   --tracing-uri value     when provided, will be used to send tracing information via opentracing
   --signing-with-path     when provided, the path will be included in the value to compute the signature
   --disable-metrics       disable the prometheus metrics
   --timeout value         used to set the cache control max age headers, set to 0 to disable (default: 15m0s)
   --cors-domain value     use to enable CORS for the specified domain (note, this is not required to use as an image service)
   --debug                 enable debug logging and pprof routes
   --json                  print logs out in JSON
   --help, -h              show help
   --version, -v           print the version
# will serve images from the $PWD
ims

# will serve images from the specified folder
ims --backend /specific/folder/with/images

# when the request is made to the alpha.example.com host, will serve images from
# the /var/images/alpha directory, when the request is made to the
# beta.example.com host, will serve images from the /var/images/beta directory.
ims --backend alpha.example.com,/var/images/alpha \
    --backend beta.example.com,/var/images/beta
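As a quick smoke test, a hypothetical client request in Python against a locally running server; the image name my-image.jpg and the width/height parameters are illustrative, not from the ims docs:

import urllib.request

# Assumes ims is running on the default listen address with a backend that
# contains my-image.jpg; width/height mirror the transformation options above.
url = "http://127.0.0.1:8080/my-image.jpg?width=100&height=200"
with urllib.request.urlopen(url) as resp:
    with open("resized.jpg", "wb") as out:
        out.write(resp.read())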
go get -u github.com/wyattjoh/ims/...
brew install wyattjoh/stable/ims
docker pull wyattjoh/ims

# or via the GitHub Container Registry
docker pull ghcr.io/wyattjoh/ims
Trending Discussions on ims
QUESTION
Problem
I have a large JSON file (~700,000 lines, 1.2 GB file size) containing Twitter data that I need to preprocess for data and network analysis. During the data collection an error happened: instead of using " as a separator, ' was used. As this does not conform to the JSON standard, the file cannot be processed by R or Python.
Information about the dataset: roughly every 500 lines start with meta information (plus meta information for the users, etc.); then come the tweets in JSON (order of fields not stable), one tweet per line, each starting with a space.
This is what I tried so far:
- A simple data.replace('\'', '\"') is not possible, as the "text" fields contain tweets which may contain ' or " themselves.
- Using regex, I was able to catch some of the instances, but it does not catch everything: re.compile(r'"[^"]*"(*SKIP)(*FAIL)|\'')
- Using literal_eval(data) from the ast package also throws an error.
As neither the order of the fields nor the length of each field is stable, I am stuck on how to reformat the file to conform to JSON.
Normal sample line of the data (for this line, options one and two would work, but note that the tweets are also in non-English languages, which use " or ' in their tweets):
{'author_id': '1236888827605725186', 'entities': {'mentions': [{'start': 108, 'end': 124, 'username': 'realDonaldTrump'}], 'hashtags': [{'start': 49, 'end': 55, 'tag': 'QAnon'}, {'start': 56, 'end': 66, 'tag': 'ProudBoys'}]}, 'context_annotations': [{'domain': {'id': '10', 'name': 'Person', 'description': 'Named people in the world like Nelson Mandela'}, 'entity': {'id': '799022225751871488', 'name': 'Donald Trump', 'description': 'US President Donald Trump'}}, {'domain': {'id': '35', 'name': 'Politician', 'description': 'Politicians in the world, like Joe Biden'}, 'entity': {'id': '799022225751871488', 'name': 'Donald Trump', 'description': 'US President Donald Trump'}}], 'text': 'RT @NinjaHodon: Here’s an example of the average #QAnon #ProudBoys crackass trash that’s going to vote for @realDonaldTrump. \n\n https://t.…', 'referenced_tweets': [{'type': 'retweeted', 'id': '1315363137240010753'}], 'conversation_id': '1315441338427506689', 'id': '1315441338427506689', 'lang': 'en', 'public_metrics': {'retweet_count': 20, 'reply_count': 0, 'like_count': 0, 'quote_count': 0}, 'created_at': '20201011T23:57:09.000Z', 'source': 'Twitter for Android', 'possibly_sensitive': False}
Reformatted sample line which causes an issue:
{"users": [{"id": "437781219", "username": "HakesJon", "location": `"Wisconsin", "description": "#IndieFictionWriter. Husband. Father. Bearded.\n#BlackLivesMatter #DemilitarizeThePolice #DismantlePolicing", "name": "Jon Hakes", "created_at": "20111215T20:42:41.000Z"}, {"id": "1171947445841997824", "username": "FactNc", "location": "Under Carolina blue skies ", "description": "Defender of truth, justice and the American way. "I never give them hell. I just tell the truth and they think it\'s hell." Harry S. Truman", "name": "NCFactFinder", "created_at": "20190912T00:44:21.000Z"}, {"id": "315041625", "username": "o0rimbuk0o", "description": "Your desire to put pronouns here is not my issue. Get help.\n\n#resist #notmypresident\n#FBiden", "name": "Sick of it", "created_at": "20110611T06:16:11.000Z"}, {"id": "3141427487", "username": "theGeekSheek", "description": "I don't believe in your God. Don't tell me he hates me.", "name": "Chic Geek", "created_at": "20150406T18:34:45.000Z"}, {"id": "1084112678", "username": "KarinBorjeesson", "description": "Love to help people & animals in need. Love music. Fucking hate racists. #Anon #OpExposeCPS #BLM #FreePalestine #Yemen #OpSerenaShim #Animalrights #NoDAPL", "name": "AnonyMISSKarin", "created_at": "20130112T20:57:28.000Z"}, {"id": "1003712866011308033", "username": "persian_pesar", "description": "\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200f\u200fبه ستواری و سختی رشک پولاد/\nبه راه عشق سرها داده بر باد/\nقرین بیستون هم\u200cسنگ فرهاد/\nز کرمانشاهیان یاد اینچنین باد\n\u200e#Civil_Environment_Engineer", "name": "persianpesar🏳\u200d🌈", "created_at": "20180604T18:59:30.000Z"}, {"id": "814795859644809217", "username": "Aazadist", "description": "\u200f\u200e#Equality🌐\n\u200e#Humanity🌐\nخواهی نشوی همرنگ ، رسوای جماعت شو", "name": "Aazad 🏳️\u200d🌈 آزاد", "created_at": "20161230T11:30:45.000Z"}, {"id": "790375699638915072", "username": "Isaihstewart", "location": "Los Angeles, CA", "description": "Part time assistant manager at “Sheets and Things”", "name": "Dey got the henessey 🗣", "created_at": "20161024T02:13:46.000Z"}, {"id": "4846243708", "username": "williamvercetti", "location": "Virginia Beach, VA", "description": "vma. art. modelo papi. tpain to the dms.", "name": "William Vercetti", "created_at": "20160125T17:21:50.000Z"}, {"id": "1160723882", "username": "k_cawsey", "location": "Halifax, Nova Scotia", "description": "Chaucer, Malory, Arthur Tolkien. @Dal_English", "name": "Dr. Kathy Cawsey", "created_at": "20130208T17:15:30.000Z"}, {"id": "3789298943", "username": "solomonesther17", "location": "Lagos, Nigeria", "description": "FairBib Legal Practitioners", "name": "Esther Solomon", "created_at": "20150927T04:52:29.000Z"}, {"id": "14860380", "username": "Dejify", "location": "San Francisco", "description": "The Nigerian State is a festering boil that the world can't afford to ignore. 
Because, when it pops, its rancid ooze won't be pleasant nor easy to contain.", "name": "Buhari: Uber Ment (Dèjì Akọ́mọláfẹ́)", "created_at": "20080521T18:57:27.000Z"}, {"id": "1120883223070773248", "username": "Donna780780", "description": "", "name": "Donna Swidley", "created_at": "20190424T02:52:40.000Z"}, {"id": "1253742908487929858", "username": "Neros_sis", "location": "Florida", "description": "", "name": "@Nero's Fiddle GOP has a terrorism problem", "created_at": "20200424T17:50:00.000Z"}, {"id": "585090491", "username": "vickierae562", "location": "The LBC", "description": "That’s Right, I’m a Lefty 🤣 and I don’t feed trolls! #resist #DumpTrump #DitchMitch #LooseLindsey", "name": "Vickie Rae", "created_at": "20120519T21:00:28.000Z"}, {"id": "1262122532607574022", "username": "EmilySi49944255", "description": "", "name": "Skylar Aubrey", "created_at": "20200517T20:47:34.000Z"}, {"id": "1401663176", "username": "mdeHummelchen", "location": "Tief im Westen", "description": "Pflegewissenschaftlerin,Pflegeberaterin,Dozentin,Lächeln und winken...Pro Pflegekammer", "name": "Madame Hummelchen 💙", "created_at": "20130504T07:44:32.000Z"}, {"id": "2381808114", "username": "mommy97giraffe", "location": "Antifa HQs/Mom Division Office", "description": "Follower of Jesus, Mennonite mom&wife, lover of books, world, peo, poetry&art. 6 autoimmunes&fibro🥄ie Proud Mama Bear of 1gayD & 1pan&autistic son, in 20s🌈💖", "name": "Mennonite Mom(she/her)", "created_at": "20140310T08:51:02.000Z"}, {"id": "2362182011", "username": "rd2glry", "location": "Washington, DC", "description": "", "name": "ateachr", "created_at": "20140224T04:07:21.000Z"}, {"id": "974917494870700032", "username": "GiraffeOld", "location": "Arizona, USA", "description": "", "name": "old man giraffe", "created_at": "20180317T07:56:58.000Z"}, {"id": "830939480", "username": "redz041", "description": "", "name": "Jan Mouzone", "created_at": "20120918T12:18:36.000Z"}, {"id": "3346032292", "username": "kumccaig44", "description": "", "name": "Katrine McCaig", "created_at": "20150625T21:25:21.000Z"}, {"id": "80630279", "username": "LuluTheCalm", "location": "Green Grass & Puddles, Canada", "description": "Mischief in My Eyes & Adventure in My Soul. \nLet's Have a Laugh &, you know, Make the World a Better Place.😎 \nAus/Brit/Cdn🇨🇦", "name": "Lulu 🇨🇦#BeKindBeCalmBeSafe💞 😷 🎏", "created_at": "20091007T17:26:56.000Z"}, {"id": "3252437864", "username": "engelhardterin", "location": "Houston, TX || Lubbock, TX", "description": "24 || Texas Tech || ♀️ || she/her", "name": "Erin Engelhardt", "created_at": "20150622T07:26:28.000Z"}, {"id": "93797267", "username": "mcbeaz", "location": "he/him", "description": "black lives matter.", "name": "mike", "created_at": "20091201T05:28:58.000Z"}, {"id": "2585773107", "username": "michiganington", "location": "Washington, D.C. ", "description": "", "name": "Allyoop", "created_at": "20140606T02:12:33.000Z"}, {"id": "27857135", "username": "JackRayher", "location": "Northport, NY", "description": "Senior Marketing Executive\nLifelong Democrat\n#BidenHarris", "name": "Jack Rayher", "created_at": "20090331T12:12:03.000Z"}, {"id": "1078457644736827392", "username": "RobertCooper58", "description": "Bilingual community advocate. Father of five wonderful kids. Lifelong progressive and proud member of @TheDemCoalition. 
Early supporter of President @JoeBiden.", "name": "Robert Cooper 🌊", "created_at": "20181228T01:08:34.000Z"}, {"id": "206860139", "username": "MariaArtze", "location": "Münster, Deutschland", "description": "Nas trincheiras da ESO\nEmigrante a medio retornar. Womansplainer.\n(Sie vostede)\n\nTrans rights are human rights.", "name": "A Malvada Profe mediovacinada", "created_at": "20101023T22:27:26.000Z"}, {"id": "2903906123", "username": "lm1067", "location": "London, England", "description": "B A FINE ARTIST GRADUATED", "name": "Luis Pais", "created_at": "20141203T15:53:10.000Z"}, {"id": "64119853", "username": "IAM_SHAKESPEARE", "location": "Tweeting from the Grave", "description": "This bot has tweeted the complete works of Shakespeare (in order) 5 times over the last 12years. On hiatus for a bit. Created by @strebel", "name": "Willy Shakes", "created_at": "20090809T05:41:08.000Z"}, {"id": "3176623941", "username": "acastellich", "location": "Chicago, Il.", "description": "Abogado,Restaurantero,Immigrant , UVM. AD1 IPADE MBA. Restaurant Hospitality Industry, Chicago IL.", "name": "Alejandro Castelli", "created_at": "20150417T13:23:17.000Z"}, {"id": "782765390925533185", "username": "Diane_L_Espo", "location": "Florida, USA", "description": "", "name": "DianeEspo 🇺🇲🗽", "created_at": "20161003T02:13:07.000Z"}, {"id": "67471020", "username": "thedcma", "location": "Fort Lauderdale, FL", "description": "🖤💎 Style is the only substance I abuse.💎🖤 I’m just a 🌈 Gay 🐔Hillbilly 🔮Warlock 🛵 Riding a 👨🏻\u200d🎤Vaporwave Fever Dream #blacklivesmatter", "name": "Grace Kelly on Steiroids", "created_at": "20090821T00:32:37.000Z"}, {"id": "78797635", "username": "graciosodiablo", "description": "Too much of a good thing can be bad. So too little of a bad thing must be good. 160 characters or less of me should be perfect.", "name": "gracioso diabloint", "created_at": "20091001T03:59:16.000Z"}, {"id": "268314713", "username": "philppedurand", "location": "Auxerre", "description": "Je suis une personne gentille je milite pour la PMA. je suis militant communiste je suis aussi à l’association des Rosoirs je suis conseillé quartier", "name": "Philippe durand", "created_at": "20110318T14:37:36.000Z"}, {"id": "37996028", "username": "nicrawhide", "location": "Pinconning Michigan ", "description": "Just your average small town gay with big town sensibility!!", "name": "Nicholas Bean", "created_at": "20090505T19:20:37.000Z"}, {"id": "1236656342674407427", "username": "LadyJayPersists", "location": "Valhalla", "description": "USN Veteran | Shieldmaiden | Mom | Not here for a man, I have one | PTSD Warrior | My mind is a beautiful servant to a dangerous master", "name": "Jax", "created_at": "20200308T14:13:48.000Z"}, {"id": "171183306", "username": "dawndawnB", "location": "United States", "description": "Mrs. B, mother of 2 amazing kids, Substance Abuse Counselor, Volunteer, Music Lover. 
Born in DC but a VA Lo❤️er!", "name": "nwad", "created_at": "20100726T19:21:24.000Z"}, {"id": "817247846751555587", "username": "me2020_2021", "location": "Brisbane, Queensland", "description": "Proud Aussie, living a wonderful life with my wife, Australian Cricket 🏏👏,😷 🍺🥃 \U0001f9ae Alex", "name": "👀🏳️\u200d🌈 "A girl has no Name”', 'created_at': '20170106T05:54:05.000Z'}, {'id': '879459933988585472', 'username': 'Davecl3069', 'location': 'San Francisco Bay Area', 'description': 'proud of my views, life long learner,& hopefully, that guy!\n#LowerTheFlagForCovidVictims #VoteBlue #BLM #SupportThePlayers #LGBTQ #WeNeedToDoBetter #ResistStill', 'name': 'David', 'created_at': '20170626T22:02:42.000Z'}`
Code used
Regex:
import regex as re  # the (*SKIP)(*FAIL) verbs need the third-party regex module, not stdlib re

def convert_to_json(file):
    with open(file, "r", encoding="utf-8") as f:
        x = f.read()
    x = x.replace("-", "")
    rx = re.compile(r'"[^"]*"(*SKIP)(*FAIL)|\'')
    decoded = rx.sub('"', x)
literal_eval:
import json
from ast import literal_eval

def open_json():
    with open("data.json", "r", encoding="utf-8") as f:
        data = literal_eval(f.read())
    data = json.loads(str(data))
What I would like to achieve
- Reformat the data to conform to JSON (this question) in order to be able to
- Build a dataframe with the relevant tweet text, user information and metadata (secondary goal) to be used in further analyses.
Thanks in advance for any suggestions! :)
ANSWER
Answered 2021-Jun-07 at 13:57
If the ' characters that are causing the problem are only in the tweets and descriptions, you could try this:
pre_tweet = "'text': '"
post_tweet = "', 'referenced_tweets':"

with open(file, encoding="utf-8") as f:
    data = f.readlines()

output = []
errors = []
for line in data:
    if pre_tweet in line and post_tweet in line:
        first_part, rest = line.split(pre_tweet)
        tweet, last_part = rest.split(post_tweet)
        # Replace quotes everywhere except inside the tweet text itself; fresh
        # names keep the split markers intact for the following lines.
        fixed_pre = first_part.replace('\'', '\"') + pre_tweet.replace('\'', '\"')
        fixed_post = post_tweet.replace('\'', '\"') + last_part.replace('\'', '\"')
        output.append(fixed_pre + tweet + fixed_post)
    else:
        errors.append(line)
If errors is not empty, it is either because there are no tweets in the line (you can change the code a little bit to add those lines to your output), or because what comes after the tweet is not 'referenced_tweets'. In the second case, you may try to figure out what the variations could be and modify the above code to handle multiple post_tweet values.
Then you may do the same with the description, by changing pre_tweet and post_tweet to whatever usually comes before and after the description.
The number of possible keys after the tweet/description must be finite, so it may take some time to figure out all the possibilities, but in the end you should succeed.
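As a quick illustration (the sample line and the json round-trip below are hypothetical, not from the answer), the split keeps the tweet body untouched while everything around it gets its quotes replaced:

import json

pre_tweet = "'text': '"
post_tweet = "', 'referenced_tweets':"
line = "{'id': '42', 'text': 'don't mention it', 'referenced_tweets': []}"

first_part, rest = line.split(pre_tweet)
tweet, last_part = rest.split(post_tweet)
fixed = (first_part.replace("'", '"') + pre_tweet.replace("'", '"')
         + tweet
         + post_tweet.replace("'", '"') + last_part.replace("'", '"'))

print(json.loads(fixed)["text"])  # -> don't mention it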
QUESTION
I'm plotting a line plot and an image side by side. Here is my code and the current output:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#creating some random data
np.random.seed(1234)
df = pd.DataFrame(
{'px_last': 100 + np.random.randn(1000).cumsum()},
index=pd.date_range('2010-01-01', periods=1000, freq='B'),
)
fig, ax = plt.subplots(1,2)
ax[0].plot(df.index, df['px_last'])
#reading the image
url = 'https://raw.githubusercontent.com/kornelski/pngquant/master/test/img/test.png'
im = plt.imread(url)
implot = ax[1].imshow(im)
The image is scaled down and the line plot is enlarged on the y-axis. I want both figures to have the same vertical (y-axis) size - that is, the height of the line plot reduced to match the height of the image.
I tried changing the figure size of the subplots and editing the gridspec parameter as mentioned here, but that only changes the width of the image, not the height. Any help is appreciated!
ANSWER
Answered 2021-Jun-09 at 08:17
One simple solution is to use automatic aspect on the image.
implot = ax[1].imshow(im, aspect="auto")
This will make the shape of your image the same as that of your other plot.
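Equivalently (a small variation, not from the original answer), the aspect can be set on the axes after drawing:

implot = ax[1].imshow(im)
ax[1].set_aspect("auto")  # same effect as passing aspect="auto" to imshow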
QUESTION
I have the following Canvas class for drawing color in given pixels:
class Canvas:
    def __init__(self, color, pixel):
        self.color = color
        self.pixel = pixel
        self.im = np.zeros((pixel, pixel, 3))
        self.im[:, :] = color

    def show(self):
        plt.imshow(self.im)
        plt.axis("off")
This simple class draws a square filled with a color, for example:
purple = np.array([0.5, 0.0, 0.5])
C = Canvas(purple, 2001)  # 2001 x 2001 pixels
C.show()
I would now like an add_disk method so that, for example, the following draws a white disk:
C.add_disk((1001, 1001), 500, white)
C.show()
ANSWER
Answered 2021-Jun-09 at 04:36
With meshgrid, using the shape of self.im, we first find the X and Y coordinates of every pixel in the 2D image. Then we select all pixels whose coordinates satisfy the circle rule ((X - Ox)**2 + (Y - Oy)**2 <= R**2).
import matplotlib.pyplot as plt
import numpy as np

class Canvas:
    def __init__(self, color, pixel):
        self.color = color
        self.pixel = pixel
        self.im = np.zeros((pixel, pixel, 3))
        self.im[:, :] = color

    def show(self):
        plt.imshow(self.im)
        plt.axis("off")

    def add_disk(self, centroid, radius, color):
        # Coordinates of every pixel in the image:
        x, y = np.meshgrid(np.arange(self.im.shape[0]), np.arange(self.im.shape[1]))
        # Boolean mask of the pixels inside the circle:
        circle_pixels = (x - centroid[0]) ** 2 + (y - centroid[1]) ** 2 <= radius ** 2
        self.im[circle_pixels, ...] = color

purple = np.array([0.5, 0.0, 0.5])
C = Canvas(purple, 2001)  # 2001 x 2001 pixels
C.show()

white = np.array([255, 255, 255])
C.add_disk((1001, 1001), 500, white)
C.show()
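One small caveat (an addition, not part of the original answer): for float images matplotlib expects RGB values in [0, 1], so defining white as below avoids the clipping warning that np.array([255, 255, 255]) triggers:

white = np.array([1.0, 1.0, 1.0])  # float RGB in [0, 1]
C.add_disk((1001, 1001), 500, white)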
QUESTION
For my project, I am using tensorflow to predict handwritten user input.
Basically I used this dataset: https://www.kaggle.com/rishianand/devanagari-character-set, and created a model. I used matplotlib to see the images that were being produced by the pixels.
My code essentially works with the training data, but I want to take it a bit further. Through CV2, I created a GUI that allows users to draw a Nepali letter. After this, I have branching that tells the program to save the image on the computer.
This is a snippet of my code for it:
# loop to show the image (snippet assumes: import cv2, numpy as np, and
# from PIL import Image)
while True:
    img = cv2.imshow('window', win)  # showing the window
    k = cv2.waitKey(1)
    if k == ord('c'):
        win = np.zeros((500, 500, 3), dtype='float64')  # creating a new image
    # saving the image as a file to then resize it
    if k == ord('s'):
        cv2.imwrite("nepali_character.jpg", win)
        img = cv2.imread("nepali_character.jpg")
        cv2.imshow('char', img)
        # trying to resize the image using Pillow
        size = (32, 32)
        # create a while loop(make the user print stuff until they print something that STOPS it)
        im = Image.open("nepali_character.jpg")
        out = im.resize(size)
        l = out.save('resized.jpg')
        imgout = cv2.imread('resized.jpg')
        cv2.imshow("out", imgout)
        # finding the pixels of the image, will be printed as a matrix
        pix = cv2.imread('resized.jpg', 1)
        print(pix)
    if k == ord('q'):  # if k is 'q' then we close the window
        cv2.destroyAllWindows()
        break
I resize the image because those were the dimensions of the data in the dataset.
Now my question is: HOW do I predict what that letter is using tensorflow?
When I asked my teacher about it, he said to put it in my data file, treat it as training data, and then look at the weights and pick the greatest weight?
But I'm confused about how I can put this image into that data file.
If anyone has any suggestions on how to take user input and then predict, that would be greatly appreciated.
ANSWER
Answered 2021-Jun-03 at 03:15
Understand the dataset:
- the size of the image is 32 x 32
- there are 46 different characters/alphabets
['character_10_yna', 'character_11_taamatar', 'character_12_thaa', 'character_13_daa', 'character_14_dhaa', 'character_15_adna', 'character_16_tabala', 'character_17_tha', 'character_18_da', 'character_19_dha', 'character_1_ka', 'character_20_na', 'character_21_pa',
'character_22_pha', 'character_23_ba', 'character_24_bha', 'character_25_ma',
'character_26_yaw', 'character_27_ra', 'character_28_la', 'character_29_waw', 'character_2_kha', 'character_30_motosaw', 'character_31_petchiryakha', 'character_32_patalosaw', 'character_33_ha', 'character_34_chhya',
'character_35_tra', 'character_36_gya', 'character_3_ga', 'character_4_gha', 'character_5_kna', 'character_6_cha', 'character_7_chha', 'character_8_ja',
'character_9_jha', 'digit_0', 'digit_1', 'digit_2', 'digit_3', 'digit_4', 'digit_5', 'digit_6', 'digit_7', 'digit_8', 'digit_9']
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
import pathlib
dataDir = "/xx/xx/xx/xx/datasets/Devanagari/drive-download-20210601T224146Z-001/Train"
data_dir = keras.utils.get_file(dataDir, 'file://'+dataDir)
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.png')))
print(image_count)
batch_size = 32
img_height = 180 # scale it up for better performance
img_width = 180 # scale it up for better performance
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
class_names = train_ds.class_names
print(class_names) # 46 classes
For caching and normalization, refer to the tensorflow tutorial:
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
print(np.min(first_image), np.max(first_image))
Model setup, compile and training:
num_classes = 46
model = Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

epochs = 10
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
This will result in the following (very promising!):
Epoch 10/10
1955/1955 [==============================] - 924s 472ms/step - loss: 0.0201 - accuracy: 0.9932 - val_loss: 0.2267 - val_accuracy: 0.9504
Save the model (training takes a while, so it is better to save the model):
!mkdir -p saved_model
model.save('saved_model/my_model')
Load the model:
loaded_model = tf.keras.models.load_model('saved_model/my_model')
# Check its architecture
loaded_model.summary()
Now the final task: get the prediction. One way is as follows:
import cv2
im2=cv2.imread('datasets/Devanagari/drive-download-20210601T224146Z-001/Test/character_3_ga/3711.png')
im2=cv2.resize(im2, (180,180)) # resize to 180,180 as that is on which model is trained on
print(im2.shape)
img2 = tf.expand_dims(im2, 0) # expand the dims means change shape from (180, 180, 3) to (1, 180, 180, 3)
print(img2.shape)
predictions = loaded_model.predict(img2)
score = tf.nn.softmax(predictions[0]) # # get softmax for each output
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
) # get the np.argmax, means give me the index where probability is max, in this case it got 29. This answers the response
# you got from your instructor. that is "greatest weight"
(180, 180, 3)
(1, 180, 180, 3)
This image most likely belongs to character_3_ga with a 100.00 percent confidence.
The other way is with live input, which is what you are trying to achieve. The image shape needs to be (1, 180, 180, 3) for this example (or (1, 32, 32, 3) if no resize was done), and then you feed it to predict. Something like below:
# resize to the size the model was trained on (180x180 here), then convert the
# PIL image to an array before feeding it to the model
out = np.array(im.resize((180, 180)))
out = tf.expand_dims(out, 0)  # shape (1, 180, 180, 3)
predictions = loaded_model.predict(out)
score = tf.nn.softmax(predictions[0])  # get softmax for each output
print(
    "This image most likely belongs to {} with a {:.2f} percent confidence."
    .format(class_names[np.argmax(score)], 100 * np.max(score))
)
QUESTION
I am implementing a simple DQN algorithm using pytorch, to solve the CartPole environment from gym. I have been debugging for a while now, and I can't figure out why the model is not learning.
Observations:
- using SmoothL1Loss performs worse than MSELoss, but the loss increases for both
- a smaller LR in Adam does not work; I have tested 0.0001, 0.00025, 0.0005 and the default
Notes:
- I have debugged various parts of the algorithm individually, and can say with good confidence that the issue is in the learn function. I am wondering if this bug is due to me misunderstanding detach in pytorch, or some other framework mistake I'm making.
- I am trying to stick as close to the original paper as possible (linked above)
import torch as T
import torch.nn as nn
import torch.nn.functional as F
import gym
import numpy as np
class ReplayBuffer:
    def __init__(self, mem_size, input_shape, output_shape):
        self.mem_counter = 0
        self.mem_size = mem_size
        self.input_shape = input_shape

        self.actions = np.zeros(mem_size)
        self.states = np.zeros((mem_size, *input_shape))
        self.states_ = np.zeros((mem_size, *input_shape))
        self.rewards = np.zeros(mem_size)
        self.terminals = np.zeros(mem_size)

    def sample(self, batch_size):
        indices = np.random.choice(self.mem_size, batch_size)
        return self.actions[indices], self.states[indices], \
            self.states_[indices], self.rewards[indices], \
            self.terminals[indices]

    def store(self, action, state, state_, reward, terminal):
        index = self.mem_counter % self.mem_size

        self.actions[index] = action
        self.states[index] = state
        self.states_[index] = state_
        self.rewards[index] = reward
        self.terminals[index] = terminal
        self.mem_counter += 1

class DeepQN(nn.Module):
    def __init__(self, input_shape, output_shape, hidden_layer_dims):
        super(DeepQN, self).__init__()

        self.input_shape = input_shape
        self.output_shape = output_shape

        layers = []
        layers.append(nn.Linear(*input_shape, hidden_layer_dims[0]))
        for index, dim in enumerate(hidden_layer_dims[1:]):
            layers.append(nn.Linear(hidden_layer_dims[index], dim))
        layers.append(nn.Linear(hidden_layer_dims[-1], *output_shape))

        self.layers = nn.ModuleList(layers)

        self.loss = nn.MSELoss()
        self.optimizer = T.optim.Adam(self.parameters())

    def forward(self, states):
        for layer in self.layers[:-1]:
            states = F.relu(layer(states))
        return self.layers[-1](states)

    def learn(self, predictions, targets):
        self.optimizer.zero_grad()
        loss = self.loss(input=predictions, target=targets)
        loss.backward()
        self.optimizer.step()

        return loss

class Agent:
    def __init__(self, epsilon, gamma, input_shape, output_shape):
        self.input_shape = input_shape
        self.output_shape = output_shape
        self.epsilon = epsilon
        self.gamma = gamma

        self.q_eval = DeepQN(input_shape, output_shape, [64])
        self.memory = ReplayBuffer(10000, input_shape, output_shape)

        self.batch_size = 32
        self.learn_step = 0

    def move(self, state):
        if np.random.random() < self.epsilon:
            return np.random.choice(*self.output_shape)
        else:
            self.q_eval.eval()
            state = T.tensor([state]).float()
            action = self.q_eval(state).max(axis=1)[1]
            return action.item()

    def sample(self):
        actions, states, states_, rewards, terminals = \
            self.memory.sample(self.batch_size)

        actions = T.tensor(actions).long()
        states = T.tensor(states).float()
        states_ = T.tensor(states_).float()
        rewards = T.tensor(rewards).view(self.batch_size).float()
        terminals = T.tensor(terminals).view(self.batch_size).long()

        return actions, states, states_, rewards, terminals

    def learn(self, state, action, state_, reward, done):
        self.memory.store(action, state, state_, reward, done)

        if self.memory.mem_counter < self.batch_size:
            return

        self.q_eval.train()
        self.learn_step += 1
        actions, states, states_, rewards, terminals = self.sample()
        indices = np.arange(self.batch_size)
        q_eval = self.q_eval(states)[indices, actions]
        q_next = self.q_eval(states_).detach()
        q_target = rewards + self.gamma * q_next.max(axis=1)[0] * (1 - terminals)

        loss = self.q_eval.learn(q_eval, q_target)
        self.epsilon *= 0.9 if self.epsilon > 0.1 else 1.0

        return loss.item()

def learn(env, agent, episodes=500):
    print('Episode: Mean Reward: Last Loss: Mean Step')

    rewards = []
    losses = [0]
    steps = []
    num_episodes = episodes
    for episode in range(num_episodes):
        done = False
        state = env.reset()
        total_reward = 0
        n_steps = 0

        while not done:
            action = agent.move(state)
            state_, reward, done, _ = env.step(action)
            loss = agent.learn(state, action, state_, reward, done)

            state = state_
            total_reward += reward
            n_steps += 1

            if loss:
                losses.append(loss)

        rewards.append(total_reward)
        steps.append(n_steps)

        if episode % (episodes // 10) == 0 and episode != 0:
            print(f'{episode:5d} : {np.mean(rewards):5.2f} '
                  f': {np.mean(losses):5.2f}: {np.mean(steps):5.2f}')
            rewards = []
            losses = [0]
            steps = []

    print(f'{episode:5d} : {np.mean(rewards):5.2f} '
          f': {np.mean(losses):5.2f}: {np.mean(steps):5.2f}')
    return losses, rewards

if __name__ == '__main__':
    env = gym.make('CartPole-v1')
    agent = Agent(1.0, 1.0,
                  env.observation_space.shape,
                  [env.action_space.n])

    learn(env, agent, 500)
ANSWER
Answered 2021-Jun-02 at 17:39
The main problem, I think, is the discount factor, gamma. You are setting it to 1.0, which means that you are giving the same weight to future rewards as to the current one. Usually in reinforcement learning we care more about the immediate reward than the future ones, so gamma should always be less than 1.
Just to give it a try, I set gamma = 0.99 and ran your code:
Episode: Mean Reward: Last Loss: Mean Step
100 : 34.80 : 0.34: 34.80
200 : 40.42 : 0.63: 40.42
300 : 65.58 : 1.78: 65.58
400 : 212.06 : 9.84: 212.06
500 : 407.79 : 19.49: 407.79
As you can see the loss still increases (even if not as much as before), but so does the reward. You should consider that loss here is not a good metric for performance, because you have a moving target. You can reduce the instability of the target by using a target network. With additional parameter tuning and a target network, one could probably make the loss even more stable.
Also, note that in reinforcement learning the loss value is not as important as it is in supervised learning; a decrease in loss does not always imply an improvement in performance, and vice versa.
The problem is that the Q target is moving while the training steps happen; as the agent plays, predicting the correct sum of rewards gets extremely hard (e.g. more states and rewards explored means higher reward variance), so the loss increases. This is even clearer in more complex environments (more states, varied rewards, etc).
At the same time, the Q network is getting better at approximating the Q values for each action, so the rewards (could) increase.
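A minimal sketch of the target-network idea mentioned above, reusing the Agent from the question; the AgentWithTarget name and the sync_every interval are illustrative, not part of the original answer:

import copy

class AgentWithTarget(Agent):
    def __init__(self, epsilon, gamma, input_shape, output_shape, sync_every=200):
        super().__init__(epsilon, gamma, input_shape, output_shape)
        self.q_target = copy.deepcopy(self.q_eval)  # frozen copy of the online net
        self.sync_every = sync_every

    def learn(self, state, action, state_, reward, done):
        self.memory.store(action, state, state_, reward, done)
        if self.memory.mem_counter < self.batch_size:
            return

        self.q_eval.train()
        self.learn_step += 1
        actions, states, states_, rewards, terminals = self.sample()
        indices = np.arange(self.batch_size)
        q_eval = self.q_eval(states)[indices, actions]
        # Targets come from the frozen copy instead of the online network.
        q_next = self.q_target(states_).detach()
        q_target = rewards + self.gamma * q_next.max(axis=1)[0] * (1 - terminals)
        loss = self.q_eval.learn(q_eval, q_target)
        self.epsilon *= 0.9 if self.epsilon > 0.1 else 1.0

        # Periodically copy the online weights into the target network.
        if self.learn_step % self.sync_every == 0:
            self.q_target.load_state_dict(self.q_eval.state_dict())

        return loss.item()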
QUESTION
I am building a new Node project. While installing the bcrypt package I got the error given below:
> bcrypt@5.0.1 install /media/keval/E: Drive/projects/MERN Projects/FMS/node_modules/bcrypt
> node-pre-gyp install --fallback-to-build
sh: 1: node-pre-gyp: not found
npm WARN ims@1.0.0 No repository field.
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! bcrypt@5.0.1 install: `node-pre-gyp install --fallback-to-build`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the bcrypt@5.0.1 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/keval/.npm/_logs/2021-05-29T09_24_30_366Z-debug.log
All other packages installed properly without errors; I only get the error while installing bcrypt. I have node-pre-gyp installed. I found one suggested solution for this problem:
npm uninstall bcrypt --save
npm install bcrypt@5 --save
but it is not working at all. What am I doing wrong?
ANSWER
Answered 2021-May-30 at 11:44
Yes, I also faced this problem, but don't worry: you can install bcryptjs instead of bcrypt. It works the same as bcrypt. First run npm uninstall bcrypt, then npm install bcryptjs. It will work, but make sure you change the package import like this: import bcrypt from 'bcryptjs';
QUESTION
I'm trying to display a greyscale image (made with matplotlib) in a tkinter Frame. The image itself is displayed, but with some whitespace between each side of the actual image and the tkinter Frame border. Any ideas on how I can avoid drawing the image with the whitespace?
The code I currently use:
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
fig = Figure(figsize=(20, 20))
ax = fig.add_subplot(111)
ax.imshow(self.data, cmap="Greys", interpolation=None)
ax.axis("off")
canvas = FigureCanvasTkAgg(fig, master=self.frame)
canvas.get_tk_widget().pack()
canvas.draw()
ANSWER
Answered 2021-May-28 at 17:55
It looks like the whitespace exists both in the plot and in the GUI, so the problem lies on the matplotlib side. The fig.subplots_adjust(left=0.01, right=0.99, top=0.99, bottom=0.01) method can reduce this whitespace.
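Alternatively (a sketch that goes beyond the original answer), you can make the axes span the whole figure, which removes the padding entirely:

fig = Figure(figsize=(20, 20))
ax = fig.add_axes([0, 0, 1, 1])  # left, bottom, width, height as figure fractions
ax.imshow(self.data, cmap="Greys", interpolation=None)
ax.axis("off")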
QUESTION
I tried the code provided below to segment each digit in this image, put a contour around it and then crop it out, but it's giving me bad results; I'm not sure what I need to change or work on.
The best idea I can think of right now is to filter the 4 largest contours in the image, excluding the image contour itself.
The code I'm working with:
import sys
import numpy as np
import cv2

im = cv2.imread('marks/mark28.png')
im3 = im.copy()

gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.adaptiveThreshold(blur, 255, 1, 1, 11, 2)

#################   Now finding Contours   ###################

contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

samples = np.empty((0, 100))
responses = []
keys = [i for i in range(48, 58)]

for cnt in contours:
    if cv2.contourArea(cnt) > 50:
        [x, y, w, h] = cv2.boundingRect(cnt)
        if h > 28:
            cv2.rectangle(im, (x, y), (x + w, y + h), (0, 0, 255), 2)
            roi = thresh[y:y + h, x:x + w]
            roismall = cv2.resize(roi, (10, 10))
            cv2.imshow('norm', im)
            key = cv2.waitKey(0)
            if key == 27:  # (escape to quit)
                sys.exit()
            elif key in keys:
                responses.append(int(chr(key)))
                sample = roismall.reshape((1, 100))
                samples = np.append(samples, sample, 0)

responses = np.array(responses, np.float32)
responses = responses.reshape((responses.size, 1))
print("training complete")
np.savetxt('generalsamples.data', samples)
np.savetxt('generalresponses.data', responses)
I probably need to change the if condition on height, but more importantly I need if conditions to get the 4 largest contours in the image. Sadly, I haven't managed to find what I'm supposed to be filtering for.
This is the kind of result I get; I'm trying to avoid getting those inner contours on the digit "zero".
Unprocessed images as requested: example 1 example 2
All I need is an idea of what I should filter for; please don't write code. Thank you, community.
ANSWER
Answered 2021-May-25 at 23:48
You almost have it. You have multiple bounding rectangles on each digit because you are retrieving every contour (external and internal). You are using cv2.findContours in RETR_LIST mode, which retrieves all the contours but doesn't create any parent-child relationship. The parent-child relationship is what discriminates between inner (child) and outer (parent) contours; OpenCV calls this "Contour Hierarchy". Check out the docs for an overview of all hierarchy modes. Of particular interest is RETR_EXTERNAL mode. This mode fetches only external contours, so you don't get multiple contours and (by extension) multiple bounding boxes for each digit!
Also, it seems that your images have a red border. This will introduce noise while thresholding the image, and the border might be recognized as the top-level outer contour - thus, every other contour (the children of this parent contour) would not be fetched in RETR_EXTERNAL mode. Fortunately, the border position seems constant and we can eliminate it with a simple flood-fill, which pretty much fills a blob of a target color with a substitute color.
Let's check out the reworked code:
# Imports:
import cv2
import numpy as np
# Set image path
path = "D://opencvImages//"
fileName = "rhWM3.png"
# Read Input image
inputImage = cv2.imread(path+fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
The first step is to get the binary image with all the target blobs/contours. This is the result so far:
Notice the border is white. We have to delete this; a simple flood-fill at position (x=0, y=0) with black color will suffice:
# Flood-fill border, seed at (0,0) and use black (0) color:
cv2.floodFill(binaryImage, None, (0, 0), 0)
This is the filled image, no more border!
Now we can retrieve the external, outermost contours in RETR_EXTERNAL mode:
# Get each bounding box
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
Notice you also get each contour's hierarchy as the second return value. This is useful if you want to check whether the current contour is a parent or a child. Alright, let's loop through the contours and get their bounding boxes. If you want to ignore contours below a minimum area threshold, you can also implement an area filter:
# Look for the outer bounding boxes (no children):
for _, c in enumerate(contours):

    # Get the bounding rectangle of the current contour:
    boundRect = cv2.boundingRect(c)

    # Get the bounding rectangle data:
    rectX = boundRect[0]
    rectY = boundRect[1]
    rectWidth = boundRect[2]
    rectHeight = boundRect[3]

    # Estimate the bounding rect area:
    rectArea = rectWidth * rectHeight

    # Set a min area threshold
    minArea = 10

    # Filter blobs by area:
    if rectArea > minArea:

        # Draw bounding box:
        color = (0, 255, 0)
        cv2.rectangle(inputImageCopy, (int(rectX), int(rectY)),
                      (int(rectX + rectWidth), int(rectY + rectHeight)), color, 2)
        cv2.imshow("Bounding Boxes", inputImageCopy)

        # Crop bounding box:
        currentCrop = inputImage[rectY:rectY+rectHeight, rectX:rectX+rectWidth]
        cv2.imshow("Current Crop", currentCrop)
        cv2.waitKey(0)
The last three lines of the above snippet crop and show the current digit. This is the result of detected bounding boxes for both of your images (the bounding boxes are colored in green, the red border is part of the input images):
QUESTION
I have been trying to run the following code
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread("G:\myfiles\frames\frame1.jpg", 0)
image = [img]

for i in range(1):
    plt.subplot(1, 1, i+1), plt.imshow(image[i], 'gray')
    plt.xticks([]), plt.yticks([])

plt.show()
I am getting the following error
TypeError Traceback (most recent call last)
in
1 image = [img]
2 for i in range(1):
----> 3 plt.subplot(1, 1, i+1), plt.imshow(image[i], 'gray')
4 plt.xticks([]),plt.yticks([])
5
E:\anaconda\programme files\lib\site-packages\matplotlib\pyplot.py in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, filternorm, filterrad, resample, url, data, **kwargs)
2728 filternorm=filternorm, filterrad=filterrad, resample=resample,
2729 url=url, **({"data": data} if data is not None else {}),
-> 2730 **kwargs)
2731 sci(__ret)
2732 return __ret
E:\anaconda\programme files\lib\site-packages\matplotlib\__init__.py in inner(ax, data, *args, **kwargs)
1445 def inner(ax, *args, data=None, **kwargs):
1446 if data is None:
-> 1447 return func(ax, *map(sanitize_sequence, args), **kwargs)
1448
1449 bound = new_sig.bind(ax, *args, **kwargs)
E:\anaconda\programme files\lib\site-packages\matplotlib\axes\_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, filternorm, filterrad, resample, url, **kwargs)
5521 resample=resample, **kwargs)
5522
-> 5523 im.set_data(X)
5524 im.set_alpha(alpha)
5525 if im.get_clip_path() is None:
E:\anaconda\programme files\lib\site-packages\matplotlib\image.py in set_data(self, A)
701 not np.can_cast(self._A.dtype, float, "same_kind")):
702 raise TypeError("Image data of dtype {} cannot be converted to "
--> 703 "float".format(self._A.dtype))
704
705 if self._A.ndim == 3 and self._A.shape[-1] == 1:
TypeError: Image data of dtype object cannot be converted to float
This is the specific error; it seems like matplotlib raises it:
TypeError: Image data of dtype object cannot be converted to float
Why does this error arise, and what is a possible fix?
ANSWER
Answered 2021-May-24 at 18:30
You'll need to escape the backslashes in your path, since the backslash is the escape character in Python string literals.
You can try:
img = cv2.imread("G:\\myfiles\\frames\\frame1.jpg", 0)
Or you can use an r-string (raw string):
img = cv2.imread(r"G:\myfiles\frames\frame1.jpg", 0)
Lastly, you can try using forward slashes instead of backslashes:
img = cv2.imread("G:/myfiles/frames/frame1.jpg", 0)
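A fourth option (an addition, not part of the original answer) is pathlib, which sidesteps the backslash issue altogether:

from pathlib import Path

import cv2

img = cv2.imread(str(Path("G:/myfiles/frames") / "frame1.jpg"), 0)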
QUESTION
I have to compress some folders every month; their names always start with the number of the referenced month followed by a -.
For example:
April: folder is 04- ??????
May: folder is 05- ???????
I only know the first part of the folder name. The rest of the folder name is always different.
I'm stuck here:
@echo off
for /f "delims=" %%G In ('PowerShell -Command "&{((Get-Date).AddMonths(-1)).ToString('yyyy')}"') do set "ano=%%G"
for /f "delims=" %%A In ('PowerShell -Command "&{((Get-Date).AddMonths(-1)).ToString('MM-')}"') do set "mes=%%A"
set "winrar=C:\Program Files\winrar"
"%winrar%\rar.exe" a -ibck -ep1 "C:\FOLDER 1\FOLDER 2\FOLDER 3\%ano%\????????.rar"
I only have the first part of the folder name, like 04-.
How can I tell Rar.exe to compress the folder given only that first part of its name?
ANSWER
Answered 2021-May-24 at 17:22
I recommend reading the answers on Time is set incorrectly after midnight to understand the first FOR command line of the batch code below, which gets the current year and month without using PowerShell:
@echo off
setlocal EnableExtensions DisableDelayedExpansion
pushd "%~dp0"
for /F "tokens=1,2 delims=/" %%I in ('%SystemRoot%\System32\robocopy.exe "%SystemDrive%\|" . /NJH') do set "Year=%%I" & set "Month=%%J" & goto CheckFolder
:CheckFolder
for /D %%I in (%Month%-*) do goto CompressFolders
echo INFO: There is no non-hidden folder with name: %Month%-*
goto EndBatch
:CompressFolders
set "ArchiveFolder=C:\FOLDER 1\FOLDER 2\FOLDER 3\%Year%"
md "%ArchiveFolder%" 2>nul
if not exist "%ArchiveFolder%\" echo ERROR: Failed to create folder: "%ArchiveFolder%"& goto EndBatch
"C:\Program Files\WinRAR\Rar.exe" a -cfg- -ep1 -idq -m5 -r -y "%ArchiveFolder%\%Month%.rar" "%Month%-*\*"
:EndBatch
popd
endlocal
That batch file compresses all folders in the directory of the batch file whose names start with the current month and a hyphen into a RAR archive file, with the current month as the archive file name. So if the batch file directory contains, for example, the folders 05-Folder and 05-OtherFolder, the RAR archive file 05.rar contains these two folders with all their files and subfolders.
It is of course also possible to compress each folder whose name starts with the current month and a hyphen into a separate RAR archive file, by using the following code:
@echo off
setlocal EnableExtensions DisableDelayedExpansion
pushd "%~dp0"
for /F "tokens=1,2 delims=/" %%I in ('%SystemRoot%\System32\robocopy.exe "%SystemDrive%\|" . /NJH') do set "Year=%%I" & set "Month=%%J" & goto CheckFolder
:CheckFolder
for /D %%I in (%Month%-*) do goto CompressFolders
echo INFO: There is no non-hidden folder with name: %Month%-*
goto EndBatch
:CompressFolders
set "ArchiveFolder=C:\FOLDER 1\FOLDER 2\FOLDER 3\%Year%"
md "%ArchiveFolder%" 2>nul
if not exist "%ArchiveFolder%\" echo ERROR: Failed to create folder: "%ArchiveFolder%"& goto EndBatch
for /D %%I in (%Month%-*) do "C:\Program Files\WinRAR\Rar.exe" a -cfg- -ep1 -idq -m5 -r -y "%ArchiveFolder%\%%I.rar" "%%I\"
:EndBatch
popd
endlocal
That batch file creates the RAR archive files 05-Folder.rar and 05-OtherFolder.rar, with the folder names 05-Folder and 05-OtherFolder not included in the appropriate RAR archive file because of the backslash in "%%I\". The folder names 05-Folder and 05-OtherFolder would be included in the archive files on using just "%%I".
Please double-click the file C:\Program Files\WinRAR\Rar.txt to open this text file and read it from top to bottom. It is the manual of the console version Rar.exe. The switch -ibck is not in this manual because it is an option of the GUI version WinRAR.exe to run in the background, which means minimized to the system tray. The Rar.exe command line switch -idq is used instead, to create the archive files in quiet mode showing only errors.
For understanding the used commands and how they work, open a command prompt window, execute there the following commands, and read entirely all help pages displayed for each command very carefully.
echo /?
endlocal /?
for /?
goto /?
if /?
md /?
popd /?
pushd /?
robocopy /?
set /?
setlocal /?
See also single line with multiple commands using Windows batch file for an explanation of the operator & used multiple times in the two batch files above.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported