kandi X-RAY | CoreMLHelpers Summary
Types and functions that make it a little easier to work with Core ML in Swift.
Community Discussions
Trending Discussions on CoreMLHelpers
QUESTION
I have two Core ML MLModels (converted from .pb).

The first model outputs a Float32 3 × 512 × 512 MLMultiArray, which basically describes an image.

The second model's input is a Float32 1 × 360 × 640 × 3 MLMultiArray, which is also an image, but of a different size.

I know that in theory I can make the second model's input an image, convert the first model's output to an image (post-prediction), resize it, and feed it to the second model. But that doesn't feel very efficient, and there is already a significant delay caused by the models themselves, so I'm trying to improve performance.

Is it possible to "resize"/"reshape"/"transpose" the first model's output to match the second model's input? I'm using the helpers from https://github.com/hollance/CoreMLHelpers (by the amazing Matthijs Hollemans), but I don't really understand how to do this without damaging the data, while keeping it as efficient as possible.
Thanks!
ANSWER
Answered 2021-Jun-01 at 19:35

You don't have to turn them into images. Some options for using MLMultiArrays instead of images:
You could take the 512 × 512 output from the first model and chop off a portion to make it 360 × 512, then pad the other dimension to make it 360 × 640. But that's probably not what you want; in case it is, you'll have to write the code for this yourself.
You can also resize the 512 × 512 output to 360 × 640 by hand. To do this you will need to implement a suitable resizing routine yourself (probably bilinear interpolation), or convert the data so you can use OpenCV or the vImage framework (see the sketch after this list).
Let the model do the above. Add a ResizeBilinearLayer to the model, followed by a PermuteLayer or TransposeLayer to change the order of the dimensions. The image is then resized to 360 × 640 pixels inside the model, and the first model's output becomes 1 × 360 × 640 × 3 directly. This is easiest if you add these operations to the original model and then let coremltools convert them to the appropriate Core ML layers.
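If you take the vImage route from the second option, a minimal sketch of the by-hand resize plus transpose might look like the following. This is an illustration only, not part of the CoreMLHelpers API: the function name resizeAndTranspose is made up, and it assumes both MLMultiArrays are Float32 and contiguous in memory.

import Accelerate
import CoreML

// Hypothetical helper: resizes a contiguous Float32 [3, H, W] MLMultiArray
// to a Float32 [1, H', W', 3] MLMultiArray using vImage bilinear scaling,
// one channel plane at a time.
func resizeAndTranspose(_ input: MLMultiArray,
                        toHeight dstH: Int, width dstW: Int) throws -> MLMultiArray {
    let srcH = input.shape[1].intValue
    let srcW = input.shape[2].intValue
    let output = try MLMultiArray(
        shape: [1, NSNumber(value: dstH), NSNumber(value: dstW), 3],
        dataType: .float32)

    let srcBase = input.dataPointer.bindMemory(to: Float.self, capacity: 3 * srcH * srcW)
    let dstBase = output.dataPointer.bindMemory(to: Float.self, capacity: dstH * dstW * 3)
    var plane = [Float](repeating: 0, count: dstH * dstW)

    for c in 0..<3 {
        // One channel plane of the CHW source array.
        var src = vImage_Buffer(
            data: UnsafeMutableRawPointer(srcBase + c * srcH * srcW),
            height: vImagePixelCount(srcH), width: vImagePixelCount(srcW),
            rowBytes: srcW * MemoryLayout<Float>.stride)
        plane.withUnsafeMutableBufferPointer { buf in
            var dst = vImage_Buffer(
                data: UnsafeMutableRawPointer(buf.baseAddress!),
                height: vImagePixelCount(dstH), width: vImagePixelCount(dstW),
                rowBytes: dstW * MemoryLayout<Float>.stride)
            // Bilinear resize of this channel to the target size.
            let error = vImageScale_PlanarF(&src, &dst, nil, vImage_Flags(kvImageNoFlags))
            assert(error == kvImageNoError)
        }
        // Scatter the resized plane into the interleaved NHWC output.
        for i in 0..<(dstH * dstW) {
            dstBase[i * 3 + c] = plane[i]
        }
    }
    return output
}

Scaling each channel plane separately with vImageScale_PlanarF keeps everything in Float32 buffers and avoids the round trip through CGImage/UIImage that the question is trying to eliminate.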
QUESTION
In combination with Core ML, I am trying to show an RGBA byte array in a UIImage using the following code:
ANSWER
Answered 2018-Apr-05 at 07:03

"I am trying to show an RGB byte array"

Then kCGImageAlphaPremultipliedLast is incorrect: with three bytes per pixel there is no alpha component to premultiply. Try switching to kCGImageAlphaNone.
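For concreteness, here is a minimal sketch of building a UIImage from packed 24-bit RGB bytes with kCGImageAlphaNone (CGImageAlphaInfo.none in Swift). The names imageFromRGB, rgbBytes, width, and height are placeholders, since the asker's original snippet isn't shown here, and it assumes the data really is tightly packed RGB with no alpha byte.

import UIKit

// Minimal sketch: wrap a packed RGB byte array (3 bytes per pixel,
// no alpha) in a CGImage, then a UIImage.
func imageFromRGB(_ rgbBytes: [UInt8], width: Int, height: Int) -> UIImage? {
    guard let provider = CGDataProvider(data: Data(rgbBytes) as CFData) else { return nil }
    guard let cgImage = CGImage(
        width: width,
        height: height,
        bitsPerComponent: 8,
        bitsPerPixel: 24,                  // 3 channels x 8 bits, no alpha byte
        bytesPerRow: width * 3,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
        provider: provider,
        decode: nil,
        shouldInterpolate: false,
        intent: .defaultIntent) else { return nil }
    return UIImage(cgImage: cgImage)
}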
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported