
Java-Tutorial | [Java Engineer Interview Review Guide] This repository covers most of the core knowledge a Java programmer needs to master. It brings together many high-quality Java technical articles from across the web, aiming to be the most complete and practical learning guide for Java developers. If it helps you, give it a star to let me know. Thanks!

by h2pl | Java | Version: Current | License: No License

Download this library from GitHub.

kandi X-RAY | Java-Tutorial Summary

Java-Tutorial is a Java library. It has no reported bugs or vulnerabilities, a build file is available, and it has medium support. You can download it from GitHub.
[Java Engineer Interview Review Guide] This repository covers most of the core knowledge a Java programmer needs to master. It brings together many high-quality Java technical articles from across the web, aiming to be the most complete and practical learning guide for Java developers. If it helps you, give it a star to let me know. Thanks!

Support

  • Java-Tutorial has a medium active ecosystem.
  • It has 5,001 stars, 1,278 forks, and 166 watchers.
  • It had no major release in the last 12 months.
  • There are 20 open issues and 8 closed issues. On average, issues are closed in 65 days. There is 1 open pull request and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of Java-Tutorial is current.

Quality

  • Java-Tutorial has 0 bugs and 4 code smells.

Security

  • Java-Tutorial has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • Java-Tutorial code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • Java-Tutorial does not have a standard license declared.
  • Check the repository for any license declaration and review the terms closely.
  • Without a license, all rights are reserved, and you cannot use the library in your applications without the author's permission.

Reuse

  • Java-Tutorial releases are not available. You will need to build from source code and install.
  • Build file is available. You can build the component from source.
  • Java-Tutorial saves you 23 person hours of effort in developing the same functionality from scratch.
  • It has 65 lines of code, 1 function and 4 files.
  • It has low code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed Java-Tutorial and discovered the below as its top functions. This is intended to give you an instant insight into Java-Tutorial's implemented functionality, and help you decide if it suits your requirements.

  • Main program.

Java-Tutorial Key Features

[Java Engineer Interview Review Guide] This repository covers most of the core knowledge a Java programmer needs to master. It brings together many high-quality Java technical articles from across the web, aiming to be the most complete and practical learning guide for Java developers. If it helps you, give it a star to let me know. Thanks!

Background removal from images with OpenCV in Android

# imports:
import cv2
import numpy as np

# Helper that applies a morphological operation.
# Defined up front so it is available before its first use below:
def morphoOperation(binaryImage, kernelSize, opIterations, opString):
    # Get the structuring element:
    morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
    # Select the operation:
    if opString == "Closing":
        op = cv2.MORPH_CLOSE
    else:
        print("Morpho Operation not defined!")
        return None

    outImage = cv2.morphologyEx(binaryImage, op, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)

    return outImage

# image path
path = "D://opencvImages//"
fileName = "backgroundTest.png"

# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)

# (Optional) Deep copy for results:
inputImageCopy = inputImage.copy()

# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)

# Adaptive Thresholding:
windowSize = 31
windowConstant = 11
binaryImage = cv2.adaptiveThreshold(grayscaleImage, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, windowSize, windowConstant)

# Apply a morphological closing with a
# rectangular SE of size 3 x 3 and 10 iterations:
binaryImage = morphoOperation(binaryImage, 3, 10, "Closing")

# Find the EXTERNAL contours on the binary image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# This will store the target bounding box:
maskRect = []

# Look for the outer bounding boxes (no children):
for i, c in enumerate(contours):

    # Get blob area:
    currentArea = cv2.contourArea(c)

    # Get the bounding rectangle:
    boundRect = cv2.boundingRect(c)

    # Set a minimum area:
    minArea = 1000

    # Look for the target contour:
    if currentArea > minArea:

        # Found the target bounding rectangle:
        maskRect = boundRect

        # (Optional) Draw the rectangle on the input image.
        # Get the dimensions of the bounding rect:
        rectX = boundRect[0]
        rectY = boundRect[1]
        rectWidth = boundRect[2]
        rectHeight = boundRect[3]

        # (Optional) Set color and draw:
        color = (0, 0, 255)
        cv2.rectangle(inputImageCopy, (int(rectX), int(rectY)),
                      (int(rectX + rectWidth), int(rectY + rectHeight)), color, 2)

        # (Optional) Show image:
        cv2.imshow("Bounding Rectangle", inputImageCopy)
        cv2.waitKey(0)

# Create the mask for GrabCut.
# The mask is uint8, same dimensions as the original input:
mask = np.zeros(inputImage.shape[:2], np.uint8)

# GrabCut needs two empty matrices of
# float type (64 bits) and size 1 (rows) x 65 (columns):
bgModel = np.zeros((1, 65), np.float64)
fgModel = np.zeros((1, 65), np.float64)

# Run GrabCut in INIT_WITH_RECT mode:
grabCutIterations = 5
mask, bgModel, fgModel = cv2.grabCut(inputImage, mask, maskRect, bgModel, fgModel, grabCutIterations, mode=cv2.GC_INIT_WITH_RECT)

# Set all definite background (0) and probable background (2) pixels
# to 0, while definite and probable foreground pixels are set to 1:
outputMask = np.where((mask == cv2.GC_BGD) | (mask == cv2.GC_PR_BGD), 0, 1)

# Scale the mask from the range [0, 1] to [0, 255]:
outputMask = (outputMask * 255).astype("uint8")

# (Optional) Apply a morphological closing with a
# rectangular SE of size 3 x 3 and 5 iterations:
outputMask = morphoOperation(outputMask, 3, 5, "Closing")

# Apply a bitwise AND to the image using the mask generated by
# GrabCut to produce the final output image:
segmentedImage = cv2.bitwise_and(inputImage, inputImage, mask=outputMask)

cv2.imshow("Segmented Image", segmentedImage)
cv2.waitKey(0)
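
Since the question targets Android, here is a rough sketch of how the GrabCut step above could look with OpenCV's Java bindings. This is not part of the original answer: the class name, file names, and the example rectangle are illustrative assumptions, and it presumes the OpenCV native library has already been loaded.

// Hypothetical port of the GrabCut step to OpenCV's Java API (as used on Android).
// Assumes the OpenCV native library is loaded and `rect` comes from a prior
// contour/bounding-box stage, as in the Python answer above.
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class GrabCutSketch {
    public static Mat segment(Mat inputImage, Rect rect) {
        // Mask and the two model matrices GrabCut requires:
        Mat mask = Mat.zeros(inputImage.size(), CvType.CV_8UC1);
        Mat bgModel = Mat.zeros(1, 65, CvType.CV_64FC1);
        Mat fgModel = Mat.zeros(1, 65, CvType.CV_64FC1);

        // Run GrabCut initialized with the bounding rectangle:
        Imgproc.grabCut(inputImage, mask, rect, bgModel, fgModel, 5, Imgproc.GC_INIT_WITH_RECT);

        // Keep definite + probable foreground (mask values 1 and 3) as 255, the rest as 0:
        Mat fgd = new Mat();
        Mat prFgd = new Mat();
        Core.inRange(mask, new Scalar(Imgproc.GC_FGD), new Scalar(Imgproc.GC_FGD), fgd);
        Core.inRange(mask, new Scalar(Imgproc.GC_PR_FGD), new Scalar(Imgproc.GC_PR_FGD), prFgd);
        Mat outputMask = new Mat();
        Core.bitwise_or(fgd, prFgd, outputMask);

        // Apply the mask to the original image:
        Mat segmented = new Mat();
        Core.bitwise_and(inputImage, inputImage, segmented, outputMask);
        return segmented;
    }

    public static void main(String[] args) {
        // Illustrative file name and rectangle only:
        Mat input = Imgcodecs.imread("backgroundTest.png");
        Mat result = segment(input, new Rect(50, 50, 200, 200));
        Imgcodecs.imwrite("segmented.png", result);
    }
}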
-----------------------
import cv2
import numpy as np

def process(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_canny = cv2.Canny(img_gray, 10, 20)
    kernel = np.ones((13, 13))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=1)
    return cv2.erode(img_dilate, kernel, iterations=1)
    
def get_mask(img):
    contours, _ = cv2.findContours(process(img), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    blank = np.zeros(img.shape[:2]).astype('uint8')
    for cnt in contours:
        if cv2.contourArea(cnt) > 500:
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, peri * 0.004, True)
            cv2.drawContours(blank, [approx], -1, 255, -1) 
    return blank

img = cv2.imread("crystal.jpg")
img_masked = cv2.bitwise_and(img, img, mask=get_mask(img))

cv2.imshow("Masked", img_masked)
cv2.waitKey(0)
img = cv2.imread("crystal.jpg")
img_masked = cv2.merge(cv2.split(img) + [get_mask(img)])
cv2.imwrite("masked_crystal.png", img_masked)
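
The last snippet appends the mask as an alpha channel, so the background becomes transparent in the saved PNG. A rough equivalent with OpenCV's Java bindings is sketched below; the file names are illustrative, and it assumes a 3-channel BGR image plus a same-sized single-channel mask and an already loaded OpenCV native library.

// Hypothetical Java sketch of the "mask as alpha channel" trick shown above.
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;

public class AlphaMaskSketch {
    public static Mat addAlpha(Mat img, Mat mask) {
        List<Mat> channels = new ArrayList<>();
        Core.split(img, channels);     // B, G, R
        channels.add(mask);            // append the mask as the alpha channel
        Mat bgra = new Mat();
        Core.merge(channels, bgra);    // 4-channel BGRA image
        return bgra;
    }

    public static void main(String[] args) {
        Mat img = Imgcodecs.imread("crystal.jpg");                           // illustrative input
        Mat mask = Imgcodecs.imread("mask.png", Imgcodecs.IMREAD_GRAYSCALE); // illustrative mask
        Imgcodecs.imwrite("masked_crystal.png", addAlpha(img, mask));
    }
}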

VS Code & WSL2 - specify Java Language Level to 1.8

{
  "name": "Spring Project ???",
  "type": "java",
  "request": "launch",
  "cwd": "${workspacefolder}",
  "console": "internalConsole",
  "mainClass": "br.com.meuProject.???",
  "projectName": "???",
  "args": []
}
"java.semanticHighlighting.enabled": true,
"java.configuration.checkProjectSettingsExclusions": false,
"java.home": "C:\\Users\\name\\.sdkman\\candidates\\java\\current",
"java.jdt.ls.vmargs": "-XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Dsun.zip.disableMemoryMapping=true -Xmx1G -Xms100m -javaagent:\"/home/user/.vscode/extensions/gabrielbb.vscode-lombok-1.0.1/server/lombok.jar\"",
-----------------------
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.8.1</version>
    <configuration>
        <source>${maven.compiler.source}</source>
        <target>${maven.compiler.target}</target>
    </configuration>
</plugin>
-----------------------
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.8">

Escaping references - how to legitimately update object data?

record Customer( String name , String email ) {}
Map< String , Customer > customersByNameReadOnly = Map.copyOf( otherMap ) ;
LocalDate today = LocalDate.now() ;
LocalDate tomorrow = today.plusDays( 1 ) ;

How to Run Java Module class (java 9 jigsaw project) with third party library (jar files)?

java --module-path newout;libs
java --module-path newout:libs
$ java --module-path newout
$ libs --module com.example.trial/com.example.trial.CreateProduct

Java Regex - capture string with single dollar, but not when it has two successive ones

%([^%.]+)%|(?<!\$)\$(?:\{([^{}]+)\}|([^$.]+))
String regex = "%([^%.]+)%|(?<!\\$)\\$(?:\\{([^\\{}]+)\\}|([^$.\\s]+))";
String string = "%ABC%\n$ABC.\n$ABC$XYZ  ${ABC}\n\n$$EFG $${EFG}.";
Pattern pattern = Pattern.compile(regex, Pattern.MULTILINE);
Matcher m = pattern.matcher(string);
List<String> results = new ArrayList<>();
while (m.find()) {
    results.add(Objects.toString(m.group(1),"") + 
        Objects.toString(m.group(2),"") + 
        Objects.toString(m.group(3),""));
}
System.out.println(results); // => [ABC, ABC, ABC, XYZ, ABC]

Java regex (java.util.regex). Search for dollar sign

String search = "/bla/$V_N.$XYZ.bla";
String pattern = "[%$]([^%.$]*)";
Matcher matcher = Pattern.compile(pattern).matcher(search);
while (matcher.find()){
    System.out.println(matcher.group(1)); 
} // => V_N, XYZ

What is the basic structure of a Julia Program?

println("Hello world!")
print("Hello world!")
print *, "Hello world!"
#include<stdio.h>

int main()
{
    return printf("Hello World!\n");
}
public class HelloWorld
{
    public static void main(String[] args)
    {
        System.out.println("Hello world!");
    }
}

Java String replace - non-capturing group captures

 System.out.println(
            initial.replaceAll("\\d{3}-\\d{3}(\\-\\d{4})", "XXX-XXX$1"));

How to modify a static private field during tests?

public class ServiceExecutorTest {

    @Test
    public void doSomethingTest() throws NoSuchFieldException, IllegalAccessException {
        Field field = null;
        List<Service> oldList = null;
        try {
            field = ServiceExecutor.class.getDeclaredField("services");
            field.setAccessible(true);
            Field modifiersField = Field.class.getDeclaredField("modifiers");
            modifiersField.setAccessible(true);
            modifiersField.setInt(field, field.getModifiers() & ~Modifier.FINAL);

            final Service serviceMock1 = mock(Service.class);
            final Service serviceMock2 = mock(Service.class);
            final List<Service> serviceMockList = Arrays.asList(serviceMock1, serviceMock2);
            oldList = (List<Service>) field.get(null);
            field.set(null, serviceMockList);
            ServiceExecutor.execute();
            // or testing the controller
            // FirstController firstController = new FirstController();
            // firstController.doSomething();
            verify(serviceMock1, times(1)).execute();
            verify(serviceMock2, times(1)).execute();
        } finally {
            // restore original value
            if (field != null && oldList != null) {
                field.set(null, oldList);
            }
        }
    }

    static class Service {
        void execute() {
            throw new RuntimeException("Should not execute");
        }
    }

    static class ServiceExecutor {

        private static final List<Service> services = Arrays.asList(
                new Service());

        public static void execute() {
            for (Service s : services) {
                s.execute();
            }
        }
    }
}
-----------------------
private static final List<Service> services = Arrays.asList(...)
private final ServicesProvider serviceProvider; 
-----------------------
@RunWith(PowerMockRunner.class)
@PrepareForTest({ ServiceExecutor.class })
@SuppressStaticInitializationFor("com.gpcoder.staticblock.ServiceExecutor")
public class FirstControllerTest {

    @Before
    public void prepareForTest() throws Exception {
        PowerMockito.mockStatic(ServiceExecutor.class);
        PowerMockito.doNothing().when(ServiceExecutor.class);
    }

    @Test
    public void doSomethingTest() {
        FirstController firstController = new FirstController();
        firstController.doSomething();
        PowerMockito.verifyStatic(ServiceExecutor.class, Mockito.times(1));
    }
}

Is Java 8 stream laziness useless in practice?

//example 1: print first element of 1000 after transformations
IntStream.range(0, 1000)
    .peek(System.out::println)
    .mapToObj(String::valueOf)
    .peek(System.out::println)
    .findFirst()
    .ifPresent(System.out::println);

//example 2: check if any value has an even key
boolean valid = records
    .map(this::heavyConversion)
    .filter(this::checkWithWebService)
    .mapToInt(Record::getKey)
    .anyMatch(i -> i % 2 == 0);

// output of example 1: laziness means only the first of the 1000 elements is evaluated
0
0
0
-----------------------
// Some lengthy computation
private static int doStuff(int i) {
    try { Thread.sleep(1000); } catch (InterruptedException e) { }
    return i;
}

public static OptionalInt findFirstGreaterThanStream(int value) {
    return IntStream
            .of(MY_INTS)
            .map(Main::doStuff)
            .filter(x -> x > value)
            .findFirst();
}

public static OptionalInt findFirstGreaterThanFor(int value) {
    for (int i = 0; i < MY_INTS.length; i++) {
        int mapped = Main.doStuff(MY_INTS[i]);
        if(mapped > value){
            return OptionalInt.of(mapped);
        }
    }
    return OptionalInt.empty();
}
public static void main(String[] args) {
    long begin;
    long end;

    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanStream(5));
    end = System.currentTimeMillis();
    System.out.println(end-begin);

    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanFor(5));
    end = System.currentTimeMillis();
    System.out.println(end-begin);
}
public static OptionalInt findFirstGreaterThanParallelStream(int value) {
    return IntStream
            .of(MY_INTS)
            .parallel()
            .map(Main::doStuff)
            .filter(x -> x > value)
            .findFirst();
}
public static OptionalInt findFirstGreaterThanParallelFor(int value, Executor executor) {
    AtomicInteger counter = new AtomicInteger(0);

    CompletableFuture<OptionalInt> cf = CompletableFuture.supplyAsync(() -> {
        while(counter.get() != MY_INTS.length-1);
        return OptionalInt.empty();
    });

    for (int i = 0; i < MY_INTS.length; i++) {
        final int current = MY_INTS[i];
        executor.execute(() -> {
            int mapped = Main.doStuff(current);
            if(mapped > value){
                cf.complete(OptionalInt.of(mapped));
            } else {
                counter.incrementAndGet();
            }
        });
    }

    try {
        return cf.get();
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
        return OptionalInt.empty();
    }
}
public static void main(String[] args) throws InterruptedException {
    long begin;
    long end;

    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanParallelStream(5));
    end = System.currentTimeMillis();
    System.out.println(end-begin);

    ExecutorService executor = Executors.newFixedThreadPool(10);
    begin = System.currentTimeMillis();
    System.out.println(findFirstGreaterThanParallelFor(5678, executor));
    end = System.currentTimeMillis();
    System.out.println(end-begin);

    executor.shutdown();
    executor.awaitTermination(10, TimeUnit.SECONDS);
    executor.shutdownNow();
}
-----------------------
Stream<String> stream = Files.lines(path); // lazy operation

List<String> result = stream.limit(N).collect(Collectors.toList()); // read and collect

Community Discussions

Trending Discussions on Java-Tutorial
  • Background removal from images with OpenCV in Android
  • java.home variable does not point to a JDK Source
  • VS Code &amp; WSL2 - specify Java Language Level to 1.8
  • Set target java version when build OpenCV with brew
  • Escaping references - how to legitimately update object data?
  • How to Run Java Module class (java 9 jigsaw project) with third party library (jar files)?
  • what does `PS` command mean in vscode terminal on windows?
  • Java Regex - capture string with single dollar, but not when it has two successive ones
  • Java regex (java.util.regex). Search for dollar sign
  • What is the basic structure of a Julia Program?

QUESTION

Background removal from images with OpenCV in Android

Asked 2021-May-14 at 12:25

I want to remove the image background with OpenCV in Android. The code runs fine, but the output quality is not what I expected. I followed the Java documentation for reference:

https://opencv-java-tutorials.readthedocs.io/en/latest/07-image-segmentation.html

Thanks

Original image, my output, and the expected output are attached as images.

My code snippet in Android:

private fun doBackgroundRemoval(frame: Mat): Mat? {
    // init
    val hsvImg = Mat()
    val hsvPlanes: List<Mat> = ArrayList()
    val thresholdImg = Mat()
    var thresh_type = Imgproc.THRESH_BINARY_INV
    thresh_type = Imgproc.THRESH_BINARY

    // threshold the image with the average hue value
    hsvImg.create(frame.size(), CvType.CV_8U)
    Imgproc.cvtColor(frame, hsvImg, Imgproc.COLOR_BGR2HSV)
    Core.split(hsvImg, hsvPlanes)

    // get the average hue value of the image
    val threshValue: Double = getHistAverage(hsvImg, hsvPlanes[0])
    threshold(hsvPlanes[0], thresholdImg, threshValue, 78.0, thresh_type)
    Imgproc.blur(thresholdImg, thresholdImg, Size(1.toDouble(), 1.toDouble()))

    val kernel1 =
        Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, Size(11.toDouble(), 11.toDouble()))
    val kernel2 = Mat.ones(3, 3, CvType.CV_8U)
    // dilate to fill gaps, erode to smooth edges
    Imgproc.dilate(thresholdImg, thresholdImg, kernel1, Point(-1.toDouble(), -1.toDouble()), 1)
    Imgproc.erode(thresholdImg, thresholdImg, kernel2, Point(-1.toDouble(), -1.toDouble()), 7)
    threshold(thresholdImg, thresholdImg, threshValue, 255.0, Imgproc.THRESH_BINARY_INV)

    // create the new image
    val foreground = Mat(
        frame.size(), CvType.CV_8UC3, Scalar(
            255.toDouble(),
            255.toDouble(),
            255.toDouble()
        )
    )
    frame.copyTo(foreground, thresholdImg)
    val img_bitmap =
        Bitmap.createBitmap(foreground!!.cols(), foreground!!.rows(), Bitmap.Config.ARGB_8888)
    Utils.matToBitmap(foreground!!, img_bitmap)
    imageView.setImageBitmap(img_bitmap)

    return foreground
}

ANSWER

Answered 2021-May-11 at 02:14

The task, as you have seen, is not trivial at all. OpenCV has a segmentation algorithm called "GrabCut" that tries to solve this particular problem. The algorithm is pretty good at classifying background and foreground pixels; however, it needs very specific information to work. It can operate in two modes:

  • 1st Mode (Mask Mode): Using a Binary Mask (same size as the original input) where 100% definite background pixels are marked, as well as 100% definite foreground pixels. You don't have to mark every pixel on the image, just a region where you are sure the algorithm will find either class of pixels.

  • 2nd Mode (Foreground ROI): Using a bounding box that encloses 100% definite foreground pixels.

Now, I use the notation "100% definite" to label those pixels you are 100% sure correspond to either the background or the foreground. The algorithm classifies the pixels in four possible classes: "Definite Background", "Probable Background", "Definite Foreground" and "Probable Foreground". It will predict both Probable Background and Probable Foreground pixels, but it needs a priori information of where to find at least "Definite Foreground" pixels.
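
For reference, here is a minimal sketch of what a 1st-mode (mask mode) call might look like. This is not part of the original answer: the mask regions below are made-up placeholders, and in practice you would draw or compute the rough mask yourself.

# Sketch only: GrabCut in mask mode (GC_INIT_WITH_MASK); region coordinates are placeholders
import cv2
import numpy as np

image = cv2.imread("input.png")

# Start with every pixel marked as "probable background":
mask = np.full(image.shape[:2], cv2.GC_PR_BGD, np.uint8)
# Mark a placeholder region as probable foreground and a smaller one as definite foreground:
mask[100:300, 150:400] = cv2.GC_PR_FGD
mask[180:220, 250:300] = cv2.GC_FGD

# Internal models required by the algorithm:
bgModel = np.zeros((1, 65), np.float64)
fgModel = np.zeros((1, 65), np.float64)

# The rectangle argument is ignored in GC_INIT_WITH_MASK mode:
cv2.grabCut(image, mask, None, bgModel, fgModel, 5, mode=cv2.GC_INIT_WITH_MASK)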

With that said, we can use GrabCut in its 2nd mode (Rectangle ROI) to try and segment the input image. We can try and get a first, rough, binary mask of the input. This will mark where we are sure the algorithm can find foreground pixels. We will feed this rough mask to the algorithm and check out the results. Now, the method is not easy and its automation is not straightforward; there's some manual information we will set that works particularly well for this input image. I don't know the Java implementation of OpenCV, so I'm giving you the solution for Python. Hopefully you will be able to port it. This is the general outline of the algorithm:

  1. Get a first rough mask of the foreground object via thresholding
  2. Detect contours on the rough mask to retrieve a bounding rectangle
  3. The bounding rectangle will serve as input ROI for the GrabCut algorithm
  4. Set the parameters needed for the GrabCut algorithm
  5. Clean the segmentation mask obtained by GrabCut
  6. Use the segmentation mask to finally segment the foreground object

This is the code:

# imports:
import cv2
import numpy as np

# image path
path = "D://opencvImages//"
fileName = "backgroundTest.png"

# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)

# (Optional) Deep copy for results:
inputImageCopy = inputImage.copy()

# Convert RGB to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)

# Adaptive Thresholding
windowSize = 31
windowConstant = 11
binaryImage = cv2.adaptiveThreshold(grayscaleImage, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, windowSize, windowConstant)

The first step is to get the rough foreground mask using Adaptive Thresholding. Here, I've used the ADAPTIVE_THRESH_MEAN_C method, where the (local) threshold value is the mean of a neighborhood area on the input image. This yields the following image:

It's pretty rough, right? We can clean this up a little bit using some morphology. I use a Closing with a rectangular kernel of size 3 x 3 and 10 iterations to join the big blobs of white pixels. I've wrapped the OpenCV functions inside custom functions that save me the typing of some lines. These helper functions are presented at the end of this post. For now, this step is as follows:

# Apply a morphological closing with:
# Rectangular SE size 3 x 3 and 10 iterations
binaryImage = morphoOperation(binaryImage, 3, 10, "Closing")

This is the rough mask after filtering:

A little bit better. Ok, we can now search for the bounding box of the biggest contour. A search for the outer contours via cv2.RETR_EXTERNAL will suffice for this example, as we can safely ignore child contours, like this:

# Find the EXTERNAL contours on the binary image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# This list will store the target bounding box
maskRect = []

Additionally, let's get a list ready where we will store the target bounding rectangle. Let's now search through the detected contours. I've also implemented an area filter in case some noise is present, so blobs below a certain area threshold are ignored:

# Look for the outer bounding boxes (no children):
for i, c in enumerate(contours):

    # Get blob area:
    currentArea = cv2.contourArea(c)

    # Get the bounding rectangle:
    boundRect = cv2.boundingRect(c)

    # Set a minimum area
    minArea = 1000

    # Look for the target contour:
    if currentArea > minArea:

        # Found the target bounding rectangle:
        maskRect = boundRect

        # (Optional) Draw the rectangle on the input image:
        # Get the dimensions of the bounding rect:
        rectX = boundRect[0]
        rectY = boundRect[1]
        rectWidth = boundRect[2]
        rectHeight = boundRect[3]

        # (Optional) Set color and draw:
        color = (0, 0, 255)
        cv2.rectangle( inputImageCopy, (int(rectX), int(rectY)),
                    (int(rectX + rectWidth), int(rectY + rectHeight)), color, 2 )
        
        # (Optional) Show image:
        cv2.imshow("Bounding Rectangle", inputImageCopy)
        cv2.waitKey(0)

Optionally you can draw the bounding box found by the algorithm. This is the resulting image:

It is looking good. Note that some obvious background pixels are also enclosed by the ROI. GrabCut will try to re-classify these pixels into their proper class, i.e., "Definite Background". Alright, let's prepare the data for GrabCut:

# Create mask for Grab n Cut,
# The mask is a uint8 type, same dimensions as
# original input:
mask = np.zeros(inputImage.shape[:2], np.uint8)

# Grab n Cut needs two empty matrices of
# Float type (64 bits) and size 1 (rows) x 65 (columns):
bgModel = np.zeros((1, 65), np.float64)
fgModel = np.zeros((1, 65), np.float64)

We need to prepare three matrices/numpy arrays/whatever data type is used to represent images in Java. The first is where the segmentation mask obtained by GrabCut will be stored. This mask will have values from 0 to 3 to denote the class of each pixel on the original input. The bgModel and fgModel matrices are used internally by the algorithm to store the statistical model of the foreground and background. Be aware that both of these matrices are float matrices. Lastly, GrabCut is an iterative algorithm; it will run for n iterations. OK, let's run GrabCut:

# Run Grab n Cut on INIT_WITH_RECT mode:
grabCutIterations = 5
mask, bgModel, fgModel = cv2.grabCut(inputImage, mask, maskRect, bgModel, fgModel, grabCutIterations, mode=cv2.GC_INIT_WITH_RECT)

Ok, the classification is done. You can convert mask to a viewable image type to check out the labels of each pixel. This is optional, but should you wish to do so, you'd get 4 matrices, one per class (a small sketch of how to extract them follows the class examples below). For example, for the "Definite Background" class, GrabCut found these are the pixels belonging to that class (in white):

The pixels belonging to the "Probable Background" class are these:

That's pretty good, huh? Here are the pixels belonging to the "Probable Foreground" class:
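
As referenced above, here is one possible way (not part of the original answer) to produce those per-class images from mask, reusing the variables defined earlier; cv2.GC_BGD, cv2.GC_FGD, cv2.GC_PR_BGD and cv2.GC_PR_FGD are the label constants GrabCut writes into the mask.

# Sketch only: visualize each GrabCut class as a binary image
classLabels = [("Definite Background", cv2.GC_BGD),
               ("Definite Foreground", cv2.GC_FGD),
               ("Probable Background", cv2.GC_PR_BGD),
               ("Probable Foreground", cv2.GC_PR_FGD)]

for className, classValue in classLabels:
    # White where the pixel carries this label, black elsewhere:
    classImage = np.where(mask == classValue, 255, 0).astype("uint8")
    cv2.imshow(className, classImage)
    cv2.waitKey(0)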

Very nice. Let's create the final segmentation mask, because mask is not an image; it is just an array containing a label for each pixel. We will set the Definite Background and Probable Background pixels to 0 and everything else to 1 in the final mask; we can then "normalize" the data range and convert it to uint8 to obtain an actual image:

# Set all definite background (0) and probable background pixels (2)
# to 0 while definite foreground and probable foreground pixels are
# set to 1
outputMask = np.where((mask == cv2.GC_BGD) | (mask == cv2.GC_PR_BGD), 0, 1)

# Scale the mask from the range [0, 1] to [0, 255]
outputMask = (outputMask * 255).astype("uint8")

This is the actual segmentation mask:

Alright, we can clean this image up a little bit, because there are some small holes produced by misclassifying foreground pixels as background pixels. Let's apply another morphological closing, this time using 5 iterations:

# (Optional) Apply a morphological closing with:
# Rectangular SE size 3 x 3 and 5 iterations:
outputMask = morphoOperation(outputMask, 3, 5, "Closing")

Finally, use this outputMask in an AND with the original image to produce the final segmented result:

# Apply a bitwise AND to the image using our mask generated by
# GrabCut to generate the final output image:
segmentedImage = cv2.bitwise_and(inputImage, inputImage, mask=outputMask)

cv2.imshow("Segmented Image", segmentedImage)
cv2.waitKey(0)

This is the final result:

If you need transparency on this image, it is very straightforward to use outputMask as an alpha channel; a small sketch of that is included after the helper function below. This is the helper function I used earlier:

# Applies a morpho operation:
def morphoOperation(binaryImage, kernelSize, opIterations, opString):
    # Get the structuring element:
    morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
    # Perform Operation:
    if opString == "Closing":
        op = cv2.MORPH_CLOSE
    else:
        print("Morpho Operation not defined!")
        return None

    outImage = cv2.morphologyEx(binaryImage, op, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)

    return outImage
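
And, regarding the transparency note above, this is a minimal sketch (not from the original answer) of reusing outputMask as the alpha channel of the result; the output file name is just a placeholder.

# Sketch only: attach outputMask as an alpha channel so the background becomes transparent
b, g, r = cv2.split(inputImage)
transparentImage = cv2.merge([b, g, r, outputMask])
cv2.imwrite("segmentedTransparent.png", transparentImage)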

Source https://stackoverflow.com/questions/67400380

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install Java-Tutorial

You can download it from GitHub.
You can use Java-Tutorial like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the Java-Tutorial component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.

  • © 2022 Open Weaver Inc.