Explore all Translation open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Translation

CopyTranslator

CopyTranslator v10.0.0 破晓 (Daybreak) beta.5

translation

v6.0.7

OpenCC

Version 1.1.2

LibreTranslate

1.2.7

go-i18n

v2.1.2

Popular Libraries in Translation

How-To-Ask-Questions-The-Smart-Way

by ryanhanwu | JavaScript

17312 stars | MIT

Originally written by the well-known hacker Eric S. Raymond, this article teaches you how to ask technical questions properly and get answers you will be satisfied with.

CopyTranslator

by CopyTranslator | TypeScript

13072 stars | NOASSERTION

Foreign language reading and translation assistant based on copy and translate.

translation

by symfony | PHP

6233 stars | MIT

The Translation component provides tools to internationalize your application.

nmt

by tensorflow | Python

5896 stars | Apache-2.0

TensorFlow Neural Machine Translation Tutorial

OpenCC

by BYVoid | C++

5582 stars | Apache-2.0

Conversion between Traditional and Simplified Chinese

seq2seq

by google | Python

5407 stars | Apache-2.0

A general-purpose encoder-decoder framework for Tensorflow

程序员工作中常见的英语词汇 (Common English vocabulary in a programmer's daily work)

jshistory-cn

by doodlewind | TypeScript

3504 stars

🇨🇳 Chinese edition of "JavaScript: The First 20 Years"

LearnOpenGL-CN

by LearnOpenGL-CN | CSS

3483 stars

Simplified Chinese translation of the http://learnopengl.com tutorial series

Trending New libraries in Translation

jshistory-cn

by doodlewind | TypeScript

3504 stars

🇨🇳 Chinese edition of "JavaScript: The First 20 Years"

LibreTranslate

by LibreTranslate | Python

2173 stars | AGPL-3.0

Free and Open Source Machine Translation API. 100% self-hosted, offline capable, and easy to set up.

x-transformers

by lucidrains | Python

1541 stars | MIT

A simple but complete full-attention transformer with a set of promising experimental features from various papers

Traduzir-paginas-web

by FilipePS | JavaScript

855 stars | MPL-2.0

Translate your page in real time using Google or Yandex

awesome-fast-attention

by Separius | Python

720 stars | GPL-3.0

list of efficient attention modules

docs-cn

by vitejs | JavaScript

645 stars

Chinese translation of vitejs.dev

deep-translator

by nidhaloff | Python

555 stars | MIT

A flexible, free, and unlimited Python tool to translate between different languages in a simple way using multiple translators.

lingva-translate

by TheDavidDelta | TypeScript

479 stars | AGPL-3.0

Alternative front-end for Google Translate

EasyNMT

by UKPLab | Python

424 stars | Apache-2.0

Easy to use, state-of-the-art Neural Machine Translation for 100+ languages

Top Authors in Translation

1. javascript-tutorial: 14 Libraries, 570 stars
2. gatsbyjs: 11 Libraries, 253 stars
3. mozilla: 10 Libraries, 113 stars
4. MIUICzech-Slovak: 8 Libraries, 20 stars
5. facebookresearch: 6 Libraries, 1748 stars
6. php: 6 Libraries, 120 stars
7. redmaner: 5 Libraries, 15 stars
8. Ludeon: 5 Libraries, 112 stars
9. DeepLearnXMU: 5 Libraries, 63 stars
10. requests: 5 Libraries, 78 stars


Trending Kits in Translation

DESCRIPTION:


Language Translation with T5 Transformer


This Language Translation project is a natural language processing task that uses the T5 Transformer model to translate text between languages. Developed by Google Research, T5 (Text-to-Text Transfer Transformer) is a highly adaptable model that performs well across a range of text-based tasks, including language translation.


This project uses Hugging Face's transformers library, which provides pre-trained transformer models, tokenizers, and utilities for natural language processing tasks. In this implementation, the T5 model performs the translation.

DEPENDENT LIBRARIES USED:


Install transformers using pip:
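pip install transformers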


transformers: Developed by Hugging Face, this library offers pre-trained models designed for Natural Language Processing tasks, such as translation. It also comes equipped with tokenizers and utilities to facilitate seamless interaction with these models.
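Below is a minimal sketch of a T5 translation pipeline with this library, assuming the t5-small checkpoint and T5's text-to-text task prefix; the kit's actual solution may differ in details:

from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load a pre-trained T5 checkpoint and its tokenizer (t5-small assumed here)
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 frames translation as text-to-text: the task is given as a prompt prefix
text = "translate English to German: The house is wonderful."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))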


DESCRIPTION

The provided Python code demonstrates a real-time speech-to-text translation system using the SpeechRecognition and Googletrans libraries. The purpose of this code is to convert spoken language into written text and then translate it into the desired target language.


The code consists of two main functions:

  1. speech_to_text(): This function utilizes the SpeechRecognition library to capture audio input from the default microphone. It then attempts to convert the speech to text using the Google Web Speech API. If successful, the recognized text is printed to the console. If there is an issue with speech recognition (e.g., when the input speech is not clear or recognizable), appropriate error messages are displayed.
  2. translate_text(text, target_language='ta'): In this function, the Googletrans library is used to translate the input text into the target language. By default, the target language is set to Tamil ('ta'), but you can specify any other language code as needed. The translated text is printed to the console, and it is also returned for further use.


The code demonstrates a practical implementation of real-time speech recognition and translation, which could have various applications, such as language learning, multilingual communication, and voice-controlled systems.

Note: Ensure that you have the required dependencies, such as SpeechRecognition and Googletrans, installed in your Python environment to run the code successfully.
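Below is a minimal sketch of the two functions described above, assuming the speech_recognition and googletrans (3.x) packages; the repository's actual code may differ in details:

import speech_recognition as sr
from googletrans import Translator

def speech_to_text():
    # Capture audio from the default microphone and send it to the
    # Google Web Speech API for recognition
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        print("Speak now...")
        audio = recognizer.listen(source)
    try:
        text = recognizer.recognize_google(audio)
        print("Recognized:", text)
        return text
    except sr.UnknownValueError:
        print("Could not understand the audio.")
    except sr.RequestError as e:
        print("Speech recognition request failed:", e)
    return None

def translate_text(text, target_language='ta'):
    # Translate the recognized text; Tamil ('ta') is the default target
    translator = Translator()
    result = translator.translate(text, dest=target_language)
    print("Translated:", result.text)
    return result.text

if __name__ == "__main__":
    spoken = speech_to_text()
    if spoken:
        translate_text(spoken)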

DEPENDENT LIBRARIES: SpeechRecognition, Googletrans

GITHUB REPOSITORY LINK

AkashS333/Real-Time-Speech-to-Text-Translation-in-Python-using-Speech-Recognition (github.com)




DESCRIPTION


This Python script demonstrates an AI-Powered Text-to-Speech Translator using the `gTTS` (Google Text-to-Speech) library and `googletrans` for translation. The purpose of this code is to take user input in English and then translate and convert it to speech in the target language, which is set to Tamil (ta) by default. You can also change the target language.


Here's a breakdown of the code:


1. The script imports necessary libraries:

  - `gTTS`: A library to generate speech from text using Google Text-to-Speech API.

  - `googletrans`: A library to interact with Google Translate for language translation.

  - `os`: Used to run operating system commands (here, to play the generated audio file).


2. The function `text_to_speech_with_translation` takes two parameters:

  - `text`: The English text input that the user wants to translate and convert to speech.

  - `target_language`: The language code for the target language, set to Tamil (ta) by default.


3. Inside the function:

  - The text is translated to the target language (Tamil) using `googletrans`.

  - The translated text is saved in the variable `translated_text`.

  - The translated text is printed to the console to display the translation.


4. The translated text is then passed to the `gTTS` library to generate speech in the target language.

  - The `gTTS` library generates an audio file in MP3 format using Google's Text-to-Speech API.

  - The audio file is saved as "translated_speech.mp3".


5. Finally, the script uses the `os.system` command to play the generated audio file.

  - On Windows, the "start" command opens the default media player to play the speech.


The user is prompted to enter the English text they want to translate and hear in the target language. When executed, the script performs the translation, converts the text to speech, and plays it using the default media player. The code showcases how AI-powered language translation and text-to-speech technology can be utilized to enhance cross-language communication and accessibility.
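Below is a minimal sketch matching the breakdown above, assuming googletrans 3.x and the standard gTTS API; the repository's actual script may differ in details:

from gtts import gTTS
from googletrans import Translator
import os

def text_to_speech_with_translation(text, target_language='ta'):
    # Translate the English input into the target language (Tamil by default)
    translator = Translator()
    translated_text = translator.translate(text, dest=target_language).text
    print("Translated text:", translated_text)

    # Generate speech for the translated text and save it as an MP3 file
    tts = gTTS(text=translated_text, lang=target_language)
    tts.save("translated_speech.mp3")

    # Play the audio with the default media player
    # (the "start" command is Windows-specific)
    os.system("start translated_speech.mp3")

if __name__ == "__main__":
    user_text = input("Enter the English text to translate and hear: ")
    text_to_speech_with_translation(user_text)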

DEPENDENT LIBRARIES: gTTS, googletrans

GITHUB REPOSITORY LINK:

https://github.com/amirthap03/AI-Powered-Text-to-Speech-Translator-with-GoogleTranslateTTS



Trending Discussions on Translation

Padding scipy affine_transform output to show non-overlapping regions of transformed images

String(localized:) has no separate key and value?

Is there any rule about why is the redefinition of the enumerator ill-formed?

How to set schema_translate_map in SQLAlchemy object in Flask app

How To Scale The Contents Of A UIView To Fit A Destination Rectangle Whilst Maintaining The Aspect Ratio?

Data path "" must NOT have additional properties(extractCss) in Angular 13 while upgrading project

Possible ODR-violations when using a constexpr variable in the definition of an inline function (in C++14)

How to work with classes extended from EnumClass and generated by build_runner in Dart/Flutter?

Python best way to 'swap' words (multiple characters) in a string?

ValueError: None values not supported. Code working properly on CPU/GPU but not on TPU

QUESTION

Padding scipy affine_transform output to show non-overlapping regions of transformed images

Asked 2022-Mar-28 at 11:54

I have source (src) image(s) I wish to align to a destination (dst) image using an Affine Transformation whilst retaining the full extent of both images during alignment (even the non-overlapping areas).

I am already able to calculate the Affine Transformation rotation and offset matrix, which I feed to scipy.ndimage.interpolate.affine_transform to recover the dst-aligned src image.

The problem is that, when the images are not fully overlapping, the resultant image is cropped to only the common footprint of the two images. What I need is the full extent of both images, placed on the same pixel coordinate system. This question is almost a duplicate of this one, and the excellent answer and repository there provide this functionality for OpenCV transformations. I unfortunately need this for scipy's implementation.

Much too late, after repeatedly hitting a brick wall trying to translate the above question's answer to scipy, I came across this issue and subsequently followed it to this question. The latter question did give some insight into the wonderful world of scipy's affine transformation, but I have as yet been unable to crack my particular needs.

The transformations from src to dst can have translations and rotation. I can get translations only working (an example is shown below) and I can get rotations only working (largely hacking around the below and taking inspiration from the use of the reshape argument in scipy.ndimage.interpolation.rotate). However, I am getting thoroughly lost combining the two. I have tried to calculate what should be the correct offset (see this question's answers again), but I can't get it working in all scenarios.

Translation-only working example of padded affine transformation, which follows largely this repo, explained in this answer:

from scipy.ndimage import rotate, affine_transform
import numpy as np
import matplotlib.pyplot as plt

nblob = 50
shape = (200, 100)
buffered_shape = (300, 200)  # buffer for rotation and translation


def affine_test(angle=0, translate=(0, 0)):
    np.random.seed(42)
    # Maximum translation allowed is half the difference between shape and buffered_shape

    # Generate a buffered_shape-sized base image with random blobs
    base = np.zeros(buffered_shape, dtype=np.float32)
    random_locs = np.random.choice(np.arange(2, buffered_shape[0] - 2), nblob * 2, replace=False)
    i = random_locs[:nblob]
    j = random_locs[nblob:]
    for k, (_i, _j) in enumerate(zip(i, j)):
        # Use different values, just to make it easier to distinguish blobs
        base[_i - 2 : _i + 2, _j - 2 : _j + 2] = k + 10

    # Impose a rotation and translation on source
    src = rotate(base, angle, reshape=False, order=1, mode="constant")
    bsc = (np.array(buffered_shape) / 2).astype(int)
    sc = (np.array(shape) / 2).astype(int)
    src = src[
        bsc[0] - sc[0] + translate[0] : bsc[0] + sc[0] + translate[0],
        bsc[1] - sc[1] + translate[1] : bsc[1] + sc[1] + translate[1],
    ]
    # Cut out the destination from the centre of the base image
    dst = base[bsc[0] - sc[0] : bsc[0] + sc[0], bsc[1] - sc[1] : bsc[1] + sc[1]]

    src_y, src_x = src.shape

    def get_matrix_offset(centre, angle, scale):
        """Follows OpenCV.getRotationMatrix2D"""
        angle = angle * np.pi / 180
        alpha = scale * np.cos(angle)
        beta = scale * np.sin(angle)
        return (
            np.array([[alpha, beta], [-beta, alpha]]),
            np.array(
                [
                    (1 - alpha) * centre[0] - beta * centre[1],
                    beta * centre[0] + (1 - alpha) * centre[1],
                ]
            ),
        )

    # Obtain the rotation matrix and offset that describe the transformation
    # between src and dst
    matrix, offset = get_matrix_offset(np.array([src_y / 2, src_x / 2]), angle, 1)
    offset = offset - translate

    # Determine the outer bounds of the new image
    lin_pts = np.array([[0, src_x, src_x, 0], [0, 0, src_y, src_y]])
    transf_lin_pts = np.dot(matrix.T, lin_pts) - offset[::-1].reshape(2, 1)

    # Find min and max bounds of the transformed image
    min_x = np.floor(np.min(transf_lin_pts[0])).astype(int)
    min_y = np.floor(np.min(transf_lin_pts[1])).astype(int)
    max_x = np.ceil(np.max(transf_lin_pts[0])).astype(int)
    max_y = np.ceil(np.max(transf_lin_pts[1])).astype(int)

    # Add translation to the transformation matrix to shift to positive values
    anchor_x, anchor_y = 0, 0
    if min_x < 0:
        anchor_x = -min_x
    if min_y < 0:
        anchor_y = -min_y
    shifted_offset = offset - np.dot(matrix, [anchor_y, anchor_x])

    # Create padded destination image
    dst_h, dst_w = dst.shape[:2]
    pad_widths = [anchor_y, max(max_y, dst_h) - dst_h, anchor_x, max(max_x, dst_w) - dst_w]
    dst_padded = np.pad(
        dst,
        ((pad_widths[0], pad_widths[1]), (pad_widths[2], pad_widths[3])),
        "constant",
        constant_values=-1,
    )
    dst_pad_h, dst_pad_w = dst_padded.shape

    # Create the aligned and padded source image
    source_aligned = affine_transform(
        src,
        matrix.T,
        offset=shifted_offset,
        output_shape=(dst_pad_h, dst_pad_w),
        order=3,
        mode="constant",
        cval=-1,
    )

    # Plot the images
    fig, axes = plt.subplots(1, 4, figsize=(10, 5), sharex=True, sharey=True)
    axes[0].imshow(src, cmap="viridis", vmin=-1, vmax=nblob)
    axes[0].set_title("Source")
    axes[1].imshow(dst, cmap="viridis", vmin=-1, vmax=nblob)
    axes[1].set_title("Dest")
    axes[2].imshow(source_aligned, cmap="viridis", vmin=-1, vmax=nblob)
    axes[2].set_title("Source aligned to Dest padded")
    axes[3].imshow(dst_padded, cmap="viridis", vmin=-1, vmax=nblob)
    axes[3].set_title("Dest padded")
    plt.show()

e.g.:

affine_test(0, (-20, 40))

gives:

[figure omitted: source, dest, and aligned/padded result plots]

With a zoom-in showing the alignment within the padded images:

[figure omitted: zoomed view of the aligned, padded images]

I require the full extent of the src and dst images aligned on the same pixel coordinates, with both rotations and translations.

Any help is greatly appreciated!

ANSWER

Answered 2022-Mar-22 at 16:44

If you have two images that are similar (or the same) and you want to align them, you can do it using both the rotate and shift functions:

from scipy.ndimage import rotate, shift

First you need to find the difference in angle between the two images, angle_to_rotate; having that, you apply a rotation to src:

angle_to_rotate = 25
rotated_src = rotate(src, angle_to_rotate, reshape=True, order=1, mode="constant")

With reshape=True you avoid losing information from your original src matrix, and it pads the result so the image can be translated around the (0, 0) indexes. You can calculate this translation as (x*cos(angle), y*sin(angle)), where x and y are the dimensions of the image, but it probably won't matter.

Now you will need to translate the image to the source; to do that you can use the shift function:

rot_translated_src = shift(rotated_src, [distance_x, distance_y])

In this case there is no reshape (because otherwise you wouldn't have any real translation), so if the image was not previously padded, some information will be lost.

But you can do some padding with

np.pad(src, number, mode='constant')

To calculate distance_x and distance_y, you will need to find a point that serves as a reference between rotated_src and the destination, then calculate the distance along the x and y axes.

Summary

  1. Pad src and dst as needed.
  2. Find the angular distance between them.
  3. Rotate src with scipy.ndimage.rotate using reshape=True.
  4. Find the horizontal and vertical distances distance_x, distance_y between the rotated image and dst.
  5. Translate your rotated_src with scipy.ndimage.shift.

Code

from scipy.ndimage import rotate, shift
import matplotlib.pyplot as plt
import numpy as np

First we make the destination image:

# make and plot dest
dst = np.ones([40, 20])
dst = np.pad(dst, 10)
dst[17, [14, 24]] = 4
dst[27, 14:25] = 4
dst[26, [14, 25]] = 4
rotated_dst = rotate(dst, 20, order=1)

plt.imshow(dst)  # plot it
plt.imshow(rotated_dst)
plt.show()

We make the Source image:

# make src image and plot it
src = np.zeros([40, 20])
src = np.pad(src, 10)
src[0:20, 0:20] = 1
src[7, [4, 14]] = 4
src[17, 4:15] = 4
src[16, [4, 15]] = 4
plt.imshow(src)
plt.show()

Then we align the src to the destination:

from scipy.ndimage import rotate, shift

# In general: rotate the source by the angle that separates the two images,
# then shift it by the remaining offset, e.g.
#   rotated_src = rotate(src, angle_to_rotate, reshape=True, order=1, mode="constant")
#   rot_translated_src = shift(rotated_src, [distance_y, distance_x])
# Padding first with np.pad(src, pad_width, mode='constant') gives the
# rotation room so nothing is clipped.
rotated_src = rotate(src, 20, order=1)  # find the angle; here it is 20 (reshape=True by default)
plt.imshow(rotated_src)
plt.show()
distance_y = 8   # find these distances by comparing rotated_src and dst;
distance_x = 12  # use any visual reference, or even the corners
translated_src = shift(rotated_src, [distance_y, distance_x])
plt.imshow(translated_src)
plt.show()

PS: If you have trouble finding the angle and the distances programmatically, please leave a comment providing a bit more insight into what could be used as a reference (for example, the frame of the image or some image features/data).
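For a programmatic starting point, here is a minimal sketch (not from the answer; the helper functions are illustrative) that brute-forces the angle by maximizing the overlap between the rotated src and dst, and estimates the shift from the difference of the intensity centroids:

import numpy as np
from scipy.ndimage import rotate, shift, center_of_mass

def estimate_angle(src, dst, angles=range(0, 360)):
    # keep the candidate angle whose rotated src overlaps dst the most
    return max(angles, key=lambda a: np.sum(rotate(src, a, reshape=False, order=1) * dst))

def estimate_shift(rotated_src, dst):
    # the difference of the intensity centroids approximates the translation
    cy_s, cx_s = center_of_mass(rotated_src)
    cy_d, cx_d = center_of_mass(dst)
    return [cy_d - cy_s, cx_d - cx_s]

angle = estimate_angle(src, dst)
rotated = rotate(src, angle, reshape=False, order=1)
aligned = shift(rotated, estimate_shift(rotated, dst))

This assumes src and dst have the same shape (true for the padded images above); for real data, a proper registration routine such as phase correlation would be more robust.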

Source https://stackoverflow.com/questions/71516584

QUESTION

String(localized:) has no separate key and value?

Asked 2022-Mar-05 at 11:32

New in iOS 15, we are invited to use this String initializer method to make localizable strings in our Swift code:

init(localized keyAndValue: String.LocalizationValue,
    table: String? = nil, bundle: Bundle? = nil,
    locale: Locale = .current, comment: StaticString? = nil)

The trouble is that the first parameter is, as the internal name suggests, used for both the key and the value. You can see that from this localized French strings file:

/* Alert message: Report a tap */
"You tapped me!" = "Vous m'avez tapé!";

That resulted from my saying

String(localized: "You tapped me!", comment: "Alert message: Report a tap")

and localizing for French.

That's totally wrong! This is supposed to be a list of key–value pairs; we shouldn't be using the English user-facing text as a key.

For one thing, if we now change the English text in our String(localized:comment:) call, our French translation will break. Also, we would be unable to have different French translations for the same English text used in different contexts.

What are we supposed to do about this?

ANSWER

Answered 2021-Sep-17 at 17:45

I regard this as a major bug in String(localized:). If we were using NSLocalizedString, we would have individual key: and value: parameters. String(localized:) needs that.

I can think of two workarounds. One is: don't use String(localized:). Just keep on using NSLocalizedString.

The other is to localize explicitly for English. Instead of entering the English user-facing text as the localized: parameter, enter a key string. Then, to prevent the keys from appearing in the user interface, export the English localization and "translate" the keys into the desired English user-facing text. Now import the localization to generate the correct English .strings files.

(If your development language isn't English, substitute the development language into those instructions.)

Now when you export a different localization, such as French, the <trans-unit> element's id value is the key, to which the translator pays no attention, and the <source> is the English, which the translator duly translates.

To change the English user-facing text later on, edit the English Localizable.strings file — not the code. Nothing will break because the key remains constant.
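As a concrete sketch of that workaround (the key name here is illustrative, not from the answer), the call site passes a key rather than display text:

let message = String(localized: "alert.tap.message",
                     comment: "Alert message: Report a tap")

After the export/import round trip, the English Localizable.strings maps the key to the display text ("alert.tap.message" = "You tapped me!";), and later wording changes touch only that file.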

Source https://stackoverflow.com/questions/69227554

QUESTION

Is there any rule about why is the redefinition of the enumerator ill-formed?

Asked 2022-Feb-22 at 07:03

Consider this example

enum class A{
    a = 0,
    a = 1
};

Compilers will report an error: "redefinition of enumerator 'a'". However, [basic.def.odr#1] does not impose any requirement on enumerators:

No translation unit shall contain more than one definition of any variable, function, class type, enumeration type, template, default argument for a parameter (for a function in a given scope), or default template argument.

I wonder which normative rule in the standard restricts this?

ANSWER

Answered 2022-Feb-22 at 07:03
Original answer

Yes, as of now, the One Definition Rule in the C++ standard doesn't include enumerators.

However, the explanation that "the second a is a redeclaration of the first a" doesn't work either. From [dcl.enum#nt:enumerator-list] we can see that an enumerator-list is a list of enumerator-definitions, so they're all definitions.

enumerator-list:
    enumerator-definition
    enumerator-list , enumerator-definition

Why isn't the enumerator included in the one-definition rule? That's likely an oversight on the standard committee's part, considering that in C, redefinition of enumerators is prohibited.

From the draft of C99, Section 6.7, paragraph 5:

A declaration specifies the interpretation and attributes of a set of identifiers. A definition of an identifier is a declaration for that identifier that:
— for an object, causes storage to be reserved for that object;
— for a function, includes the function body;101)
— for an enumeration constant or typedef name, is the (only) declaration of the identifier.

From Section 6.7.2.2 we can see an enumerator is an enumeration-constant:

enumerator:
    enumeration-constant
    enumeration-constant = constant-expression

And from 6.7.2.2, paragraph 3, one can also infer that every enumerator in an enumerator-list is always not only declared but also defined:

The identifiers in an enumerator list are declared as constants that have type int and may appear wherever such are permitted. An enumerator with = defines its enumeration constant as the value of the constant expression. If the first enumerator has no =, the value of its enumeration constant is 0. Each subsequent enumerator with no = defines its enumeration constant as the value of the constant expression obtained by adding 1 to the value of the previous enumeration constant. (The use of enumerators with = may produce enumeration constants with values that duplicate other values in the same enumeration.) The enumerators of an enumeration are also known as its members.

So in C, you can't define an enumerator with the same identifier more than once: if you could, that declaration would no longer be the only declaration of the identifier, which makes it an invalid definition according to Section 6.7.

The behaviour in C might be why almost all C++ compilers prohibit redefinition of enumerators, and it is likely the intended or expected behaviour of C++ too.

Update

Update 2022-02-16: I've submitted an issue regarding this question following the procedure detailed in https://isocpp.org/std/submit-issue. It's been accepted and is now Issue 2530.

Source https://stackoverflow.com/questions/68602375

QUESTION

How to set schema_translate_map in SQLAlchemy object in Flask app

Asked 2022-Feb-19 at 23:10

My app.py file

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres:////tmp/test.db'
db = SQLAlchemy(app) # refer https://flask-sqlalchemy.palletsprojects.com/en/2.x/api/#flask_sqlalchemy.SQLAlchemy

One of my model classes, where I imported db

from sqlalchemy.ext.declarative import declarative_base

from app import db

Base = declarative_base()

# User class
class User(db.Model, Base):
  id = db.Column(db.Integer, primary_key=True)
  username = db.Column(db.String(80), unique=True, nullable=False)
  email = db.Column(db.String(120), unique=True, nullable=False)

  def __repr__(self):
    return '<User %r>' % self.username

  def get_user_by_id(self, id):
    return self.query.get(id)

My database has the same set of tables in different schemas (multi-tenancy), and I need to select the schema on the fly, per request, based on the tenant that initiated it, using before_request (grabbing the tenant_id from the subdomain of the URL).

I found that SQLAlchemy provides selecting the schema name on the fly using schema_translate_map (ref. https://docs.sqlalchemy.org/en/14/core/connections.html#translation-of-schema-names), which is set via execution_options (https://docs.sqlalchemy.org/en/14/core/connections.html#sqlalchemy.engine.Connection.execution_options).

In the snippet above where you see db = SQLAlchemy(app): as per the official documentation, two option parameters can be set when creating the SQLAlchemy object, session_options and engine_options, but there is no execution_options (ref. https://flask-sqlalchemy.palletsprojects.com/en/2.x/api/#flask_sqlalchemy.SQLAlchemy).

But how do I set schema_translate_map when creating the SQLAlchemy object?

I tried this:

db = SQLAlchemy(app,
  session_options={
    "autocommit": True,
    "autoflush": False,
    "schema_translate_map": {
      None: "public"
    }
  }
)
33

But obviously, it did not work, because schema_translate_map is under execution_options as mentioned here https://docs.sqlalchemy.org/en/14/core/connections.html#translation-of-schema-names

Does anyone have an idea how to set schema_translate_map at the time of creating the SQLAlchemy object?

My goal is to set it dynamically for each request. I want to control it from this centralized place, rather than going into each model file and specifying it when executing queries.

I am aware this can be done differently, as suggested here https://stackoverflow.com/a/56490246/1560470, but I need to set it somewhere around db = SQLAlchemy(app) in the app.py file only. I then import db in all my model classes (as shown above), and all queries in those model classes execute under the selected schema.

ANSWER

Answered 2022-Feb-19 at 23:10

I found a way to accomplish it. This is what's needed:

db = SQLAlchemy(app,
  session_options={
    "autocommit": True,
    "autoflush": False
  },
  engine_options={
    "execution_options": {
      "schema_translate_map": {
        None: "public",
        "abc": "xyz"
      }
    }
  }
)
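For the per-request goal described in the question, one possible pattern (a sketch, not from the answer; the subdomain-to-schema mapping is hypothetical) is to attach the map to the session's connection in a before_request hook:

from flask import request

@app.before_request
def select_tenant_schema():
    # hypothetical: the tenant's subdomain doubles as its schema name
    tenant_schema = request.host.split(".")[0]
    # apply the map to the connection this request's session will use
    db.session.connection(
        execution_options={"schema_translate_map": {None: tenant_schema}}
    )

Session.connection() applies the given execution_options when the connection is first procured for the session, so the map then covers the queries issued during that request.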

Source https://stackoverflow.com/questions/71099132

QUESTION

How To Scale The Contents Of A UIView To Fit A Destination Rectangle Whilst Maintaining The Aspect Ratio?

Asked 2022-Feb-16 at 15:42

I am trying to solve a problem without success and am hoping someone could help.

I have looked for similar posts but haven't been able to find anything which solves my problem.

My Scenario is as follows: I have a UIView on which a number of other UIViews can be placed. These can be moved, scaled and rotated using gesture recognisers (There is no issue here). The User is able to change the Aspect Ratio of the Main View (the Canvas) and my problem is trying to scale the content of the Canvas to fit into the new destination size.

There are a number of posts with a similar theme e.g:

calculate new size and location on a CGRect

How to create an image of specific size from UIView

But these don't address the changing of ratios multiple times.

My Approach:

When I change the aspect ratio of the canvas, I make use of AVFoundation to calculate an aspect fitted rectangle which the subviews of the canvas should fit:

let sourceRectangleSize = canvas.frame.size

canvas.setAspect(aspect, screenSize: editorLayoutGuide.layoutFrame.size)
view.layoutIfNeeded()

let destinationRectangleSize = canvas.frame.size

let aspectFittedFrame = AVMakeRect(aspectRatio: sourceRectangleSize, insideRect: CGRect(origin: .zero, size: destinationRectangleSize))
ratioVisualizer.frame = aspectFittedFrame

Test cases: the red frame is simply there to visualise the aspect-fitted rectangle. As you can see, while the aspect-fitted rectangle is correct, the scaling of the objects isn't working. This is especially true when I apply scale and rotation to the subviews (CanvasElement).

The logic where I am scaling the objects is clearly wrong:

@objc
private func setRatio(_ control: UISegmentedControl) {
  guard let aspect = Aspect(rawValue: control.selectedSegmentIndex) else { return }

  let sourceRectangleSize = canvas.frame.size

  canvas.setAspect(aspect, screenSize: editorLayoutGuide.layoutFrame.size)
  view.layoutIfNeeded()

  let destinationRectangleSize = canvas.frame.size

  let aspectFittedFrame = AVMakeRect(aspectRatio: sourceRectangleSize, insideRect: CGRect(origin: .zero, size: destinationRectangleSize))
  ratioVisualizer.frame = aspectFittedFrame

  let scale = min(aspectFittedFrame.size.width / canvas.frame.width, aspectFittedFrame.size.height / canvas.frame.height)

  for case let canvasElement as CanvasElement in canvas.subviews {
    canvasElement.frame.size = CGSize(
      width: canvasElement.baseFrame.width * scale,
      height: canvasElement.baseFrame.height * scale
    )
    canvasElement.frame.origin = CGPoint(
      x: aspectFittedFrame.origin.x + canvasElement.baseFrame.origin.x * scale,
      y: aspectFittedFrame.origin.y + canvasElement.baseFrame.origin.y * scale
    )
  }
}

I am enclosing the CanvasElement class as well, in case it helps:

final class CanvasElement: UIView {

  var rotation: CGFloat = 0
  var baseFrame: CGRect = .zero

  var id: String = UUID().uuidString

  // MARK: - Initialization

  override init(frame: CGRect) {
    super.init(frame: frame)
    storeState()
    setupGesture()
  }

  required init?(coder aDecoder: NSCoder) {
    super.init(coder: aDecoder)
  }

  // MARK: - Gesture Setup

  private func setupGesture() {
    let panGestureRecognizer = UIPanGestureRecognizer(target: self, action: #selector(panGesture(_:)))
    let pinchGestureRecognizer = UIPinchGestureRecognizer(target: self, action: #selector(pinchGesture(_:)))
    let rotateGestureRecognizer = UIRotationGestureRecognizer(target: self, action: #selector(rotateGesture(_:)))
    addGestureRecognizer(panGestureRecognizer)
    addGestureRecognizer(pinchGestureRecognizer)
    addGestureRecognizer(rotateGestureRecognizer)
  }

  // MARK: - Touches

  override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    super.touchesBegan(touches, with: event)
    moveToFront()
  }

  // MARK: - Gestures

  @objc
  private func panGesture(_ sender: UIPanGestureRecognizer) {
    let move = sender.translation(in: self)
    transform = transform.concatenating(.init(translationX: move.x, y: move.y))
    sender.setTranslation(CGPoint.zero, in: self)
    storeState()
  }

  @objc
  private func pinchGesture(_ sender: UIPinchGestureRecognizer) {
    transform = transform.scaledBy(x: sender.scale, y: sender.scale)
    sender.scale = 1
    storeState()
  }

  @objc
  private func rotateGesture(_ sender: UIRotationGestureRecognizer) {
    rotation += sender.rotation
    transform = transform.rotated(by: sender.rotation)
    sender.rotation = 0
    storeState()
  }

  // MARK: - Miscellaneous

  func moveToFront() {
    superview?.bringSubviewToFront(self)
  }

  public func rotated(by degrees: CGFloat) {
    transform = transform.rotated(by: degrees)
    rotation += degrees
  }

  func storeState() {
    print("""
    Element Frame = \(frame)
    Element Bounds = \(bounds)
    Element Center = \(center)
    """)
    baseFrame = frame
  }
}

Any help or advice, or approaches with some actual examples, would be great. I'm not expecting anyone to provide full source code, just something I could use as a basis.

Thank you for taking the time to read my question.

ANSWER

Answered 2022-Feb-06 at 10:03

Here are a few thoughts and findings while playing around with this

1. Is the right scale factor being used?

The scaling you use is a bit custom and cannot be compared directly to examples that have just one scale factor like 2 or 3. Your scale factor has two dimensions, but I see you compensate for this by taking the minimum of the width and height scaling:

let scale = min(aspectFittedFrame.size.width / canvas.frame.width,
                aspectFittedFrame.size.height / canvas.frame.height)

In my opinion, this is not the right scale factor: it compares the new aspectFittedFrame with the new canvas frame, when I believe the right scaling factor comes from comparing the new aspectFittedFrame with the previous canvas frame:

let scale
    = min(aspectFittedFrame.size.width / sourceRectangleSize.width,
          aspectFittedFrame.size.height / sourceRectangleSize.height)


2. Is the scale being applied to the right values?

If you notice, the first change, from 1:1 to 16:9, works quite well. However, after that it does not seem to work, and I believe the issue is here:

for case let canvasElement as CanvasElement in strongSelf.canvas.subviews
{
    canvasElement.frame.size = CGSize(
        width: canvasElement.baseFrame.width * scale,
        height: canvasElement.baseFrame.height * scale
    )

    canvasElement.frame.origin = CGPoint(
        x: aspectFittedFrame.origin.x
            + canvasElement.baseFrame.origin.x * scale,
        y: aspectFittedFrame.origin.y
            + canvasElement.baseFrame.origin.y * scale
    )
}

The first time, the scale works well because the canvas and the canvas elements are being scaled in sync, i.e. mapped properly.

However, beyond that point your aspect-ratio frame and your canvas elements fall out of sync, because you are always scaling based on the base values.

So in the example of 1:1 -> 16:9 -> 3:2

  • Your viewport has been scaled twice: 1:1 -> 16:9 and then 16:9 -> 3:2
  • Whereas your elements are scaled once each time, 1:1 -> 16:9 and 1:1 -> 3:2, because you always scale from the base values

So I feel that, to keep the elements matching the red viewport, you need to apply the same continuous scaling based on the previous view rather than the base view.

Just for an immediate quick fix, I update the base values of the canvas element after each change in canvas size by calling canvasElement.storeState():

for case let canvasElement as CanvasElement in strongSelf.canvas.subviews
{
    canvasElement.frame.size = CGSize(
        width: canvasElement.baseFrame.width * scale,
        height: canvasElement.baseFrame.height * scale
    )

    canvasElement.frame.origin = CGPoint(
        x: aspectFittedFrame.origin.x
            + canvasElement.baseFrame.origin.x * scale,
        y: aspectFittedFrame.origin.y
            + canvasElement.baseFrame.origin.y * scale
    )

    // I added this
    canvasElement.storeState()
}

The result is perhaps closer to what you want?

Final thoughts

While this might fix your issue, you will notice that it is not possible to come back to the original state, as a transformation is applied at each step.

A solution could be to store the current values mapped to a specific viewport aspect ratio and calculate the right sizes for the others so that if you needed to get back to the original, you could do that.
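For instance, a rough sketch of that idea (names are illustrative; Aspect is the segmented-control enum from the question, assumed Hashable):

final class CanvasStateStore {
    // element id -> (aspect ratio -> frame captured for that ratio)
    private var frames: [String: [Aspect: CGRect]] = [:]

    func store(_ element: CanvasElement, for aspect: Aspect) {
        frames[element.id, default: [:]][aspect] = element.frame
    }

    func restore(_ element: CanvasElement, to aspect: Aspect) {
        if let saved = frames[element.id]?[aspect] {
            element.frame = saved
        }
    }
}

Capturing the frame per aspect ratio means switching back to a previously seen ratio restores the exact layout instead of reapplying a lossy scale.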

Source https://stackoverflow.com/questions/71004029

QUESTION

Data path "" must NOT have additional properties (extractCss) in Angular 13 while upgrading project

Asked 2022-Jan-27 at 14:41

I am facing an issue while upgrading my project from Angular 8.2.1 to Angular 13.

After a successful upgrade, preparing a build gives me the following error:

Data path "" must NOT have additional properties(extractCss).

I already renamed styleext to style in the angular.json file, but still can't find the root cause of this error.

The angular.json file is as follows:

{
  "$schema": "./node_modules/@angular/cli/lib/config/schema.json",
  "version": 1,
  "newProjectRoot": "projects",
  "projects": {
    "qiwkCollaborator": {
      "projectType": "application",
      "schematics": {
        "@schematics/angular:component": {
          "style": "scss"
        }
      },
      "root": "",
      "sourceRoot": "src",
      "prefix": "app",
      "architect": {
        "build": {
          /* "configurations": {
            "fr": {
              "aot": true,
              "outputPath": "dist/qwikCollaborator/fr/",
              "i18nFile": "src/translate/messages.fr.xlf",
              "i18nFormat": "xlf",
              "i18nLocale": "fr",
              "i18nMissingTranslation": "error"
            },
            "en": {
              "aot": true,
              "outputPath": "dist/qwikCollaborator/en/",
              "i18nFile": "src/translate/messages.en.xlf",
              "i18nFormat": "xlf",
              "i18nLocale": "en",
              "i18nMissingTranslation": "error"
            }
          }, */
          "builder": "@angular-devkit/build-angular:browser",
          "options": {
            "outputPath": "dist/qiwkCollaborator",
            "index": "src/index.html",
            "main": "src/main.ts",
            "polyfills": "src/polyfills.ts",
            "tsConfig": "tsconfig.app.json",
            "aot": false,
            "assets": [
              "src/favicon.ico",
              "src/assets"
            ],
            "styles": [
              "src/styles.scss",
              "src/assets/css/custom-mobile.css",
              "src/assets/css/custom.css"
            ],
            "scripts": [
              "node_modules/jquery/dist/jquery.min.js",
              "src/assets/js/qwikCollaborator.js"
            ]
          },
          "configurations": {
            "es5": {
              "tsConfig": "./tsconfig.es5.json"
            },
            "production": {
              "fileReplacements": [
                {
                  "replace": "src/environments/environment.ts",
                  "with": "src/environments/environment.prod.ts"
                }
              ],
              "optimization": true,
              "outputHashing": "all",
              "sourceMap": false,
              "extractCss": true,
              "namedChunks": false,
              "aot": true,
              "extractLicenses": true,
              "vendorChunk": false,
              "buildOptimizer": true,
              "budgets": [
                {
                  "type": "initial",
                  "maximumWarning": "2mb",
                  "maximumError": "5mb"
                },
                {
                  "type": "anyComponentStyle",
                  "maximumWarning": "6kb",
                  "maximumError": "10kb"
                }
              ]
            }
          }
        },
        "serve": {
          /* "configurations": {
            "fr": {
              "browserTarget": "qwikCollaborator:build:fr"
            },
            "en": {
              "browserTarget": "qwikCollaborator:build:en"
            }
          }, */
          "builder": "@angular-devkit/build-angular:dev-server",
          "options": {
            "browserTarget": "qiwkCollaborator:build"
          },
          "configurations": {
            "es5": {
              "browserTarget": "qiwkCollaborator:build:es5"
            },
            "production": {
              "browserTarget": "qiwkCollaborator:build:es5"
            }
          }
        },
        "extract-i18n": {
          "builder": "@angular-devkit/build-angular:extract-i18n",
          "options": {
            "browserTarget": "qiwkCollaborator:build"
          }
        },
        "test": {
          "builder": "@angular-devkit/build-angular:karma",
          "options": {
            "main": "src/test.ts",
            "polyfills": "src/polyfills.ts",
            "tsConfig": "tsconfig.spec.json",
            "karmaConfig": "karma.conf.js",
            "assets": [
              "src/favicon.ico",
              "src/assets"
            ],
            "styles": [
              "src/styles.scss"
            ],
            "scripts": ["../node_modules/jspdf/dist/jspdf.min.js"]
          }
        },
        "lint": {
          "builder": "@angular-devkit/build-angular:tslint",
          "options": {
            "tsConfig": [
              "tsconfig.app.json",
              "tsconfig.spec.json",
              "e2e/tsconfig.json"
            ],
            "exclude": [
              "**/node_modules/**"
            ]
          }
        },
        "e2e": {
          "builder": "@angular-devkit/build-angular:protractor",
          "options": {
            "protractorConfig": "e2e/protractor.conf.js",
            "devServerTarget": "qiwkCollaborator:serve"
          },
          "configurations": {
            "production": {
              "devServerTarget": "qiwkCollaborator:serve:production"
            }
          }
        }
      }
    }
  },
  "defaultProject": "qiwkCollaborator"
}

How to get rid of this additional property?

Can anyone help me with this?

Thanks in advance!

ANSWER

Answered 2021-Dec-14 at 12:45

Just remove "extractCss": true from your production configuration; that will resolve the problem.

The reason is that extractCss is deprecated, and its value is true by default. See more here: Extracting CSS into JS with Angular 11 (deprecated extractCss)

Source https://stackoverflow.com/questions/70344098

QUESTION

Possible ODR-violations when using a constexpr variable in the definition of an inline function (in C++14)

Asked 2022-Jan-12 at 10:38

(Note! This question particularly covers the state of C++14, before the introduction of inline variables in C++17)

TLDR; Question
  • What constitutes odr-use of a constexpr variable used in the definition of an inline function, such that multiple definitions of the function violate [basic.def.odr]/6?

(... likely [basic.def.odr]/3; but could this silently introduce UB in a program as soon as, say, the address of such a constexpr variable is taken in the context of the inline function's definition?)

TLDR example: does a program where doMath() is defined as follows:

// some_math.h
#pragma once
#include <algorithm>

// Forced by some guideline abhorring literals.
constexpr int kTwo{2};
inline int doMath(int arg) { return std::max(arg, kTwo); }
                                 // std::max(const int&, const int&)

have undefined behaviour as soon as doMath() is defined in two different translation units (say by inclusion of some_math.h and subsequent use of doMath())?

Background

Consider the following example:

// constants.h
#pragma once
constexpr int kFoo{42};

// foo.h
#pragma once
#include "constants.h"
inline int foo(int arg) { return arg * kFoo; }  // #1: kFoo not odr-used

// a.cpp
#include "foo.h"
int a() { return foo(1); }  // foo odr-used

// b.cpp
#include "foo.h"
int b() { return foo(2); }  // foo odr-used

compiled for C++14, particularly before inline variables and thus before constexpr variables were implicitly inline.

The inline function foo (which has external linkage) is odr-used in both translation units (TU) associated with a.cpp and b.cpp, say TU_a and TU_b, and shall thus be defined in both of these TU's ([basic.def.odr]/4).

[basic.def.odr]/6 covers the requirements for when such multiple definitions (different TU's) may appear, and particularly /6.1 and /6.2 is relevant in this context [emphasis mine]:

There can be more than one definition of a [...] inline function with external linkage [...] in a program provided that each definition appears in a different translation unit, and provided the definitions satisfy the following requirements. Given such an entity named D defined in more than one translation unit, then

  • /6.1 each definition of D shall consist of the same sequence of tokens; and

  • /6.2 in each definition of D, corresponding names, looked up according to [basic.lookup], shall refer to an entity defined within the definition of D, or shall refer to the same entity, after overload resolution ([over.match]) and after matching of partial template specialization ([temp.over]), except that a name can refer to a non-volatile const object with internal or no linkage if the object has the same literal type in all definitions of D, and the object is initialized with a constant expression ([expr.const]), and the object is not odr-used, and the object has the same value in all definitions of D; and

  • ...

If the definitions of D do not satisfy these requirements, then the behavior is undefined.

/6.1 is fulfilled.

/6.2 is fulfilled if kFoo in foo:

  1. [OK] is const with internal linkage
  2. [OK] is initialized with a constant expression
  3. [OK] is of the same literal type in all definitions of foo
  4. [OK] has the same value in all definitions of foo
  5. [??] is not odr-used.

I interpret 5 as meaning "not odr-used in the definition of foo"; this could arguably have been clearer in the wording. However, if kFoo is odr-used (at least in the definition of foo), I interpret that as opening the door to ODR violations and subsequent undefined behaviour, due to violation of [basic.def.odr]/6.

Afaict [basic.def.odr]/3 governs whether kFoo is odr-used or not,

A variable x whose name appears as a potentially-evaluated expression ex is odr-used by ex unless applying the lvalue-to-rvalue conversion ([conv.lval]) to x yields a constant expression ([expr.const]) that does not invoke any non-trivial functions and, if x is an object, ex is an element of the set of potential results of an expression e, where either the lvalue-to-rvalue conversion ([conv.lval]) is applied to e, or e is a discarded-value expression (Clause [expr]). [...]

but I'm having a hard time understanding whether kFoo is considered odr-used when, e.g., its address is taken within the definition of foo, and whether taking its address outside of the definition of foo affects whether [basic.def.odr]/6.2 is fulfilled.


Further details

Particularly, consider if foo is defined as:

// #2
inline int foo(int arg) {
    std::cout << "&kFoo in foo() = " << &kFoo << "\n";
    return arg * kFoo;
}

and a() and b() are defined as:

int a() {
    std::cout << "TU_a, &kFoo = " << &kFoo << "\n";
    return foo(1);
}

int b() {
    std::cout << "TU_b, &kFoo = " << &kFoo << "\n";
    return foo(2);
}

then running a program which calls a() and b() in sequence produces:

TU_a, &kFoo    = 0x401db8
&kFoo in foo() = 0x401db8  // <-- foo() in TU_a:
                           //     &kFoo from TU_a

TU_b, &kFoo    = 0x401dbc
&kFoo in foo() = 0x401db8  // <-- foo() in TU_b:
                           // !!! &kFoo from TU_a

that is, a() and b() each print the address of their own TU-local kFoo, while both inlined copies of foo() print the same kFoo address (the one from TU_a).


Does this program (with foo and a/b defined as per this section) have undefined behaviour?

A real-life example would be one where these constexpr variables represent mathematical constants and are used, from within the definition of an inline function, as arguments to utility math functions such as std::max(), which takes its arguments by reference.

ANSWER

Answered 2021-Sep-08 at 16:34

In the OP's example with std::max, an ODR violation does indeed occur, and the program is ill-formed NDR. To avoid this issue, you might consider one of the following fixes (both sketched after this list):

  • give the doMath function internal linkage, or
  • move the declaration of kTwo inside doMath
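
For concreteness, here is a minimal sketch of both fixes (my illustration, not part of the original answer), assuming the some_math.h header from the question; the function names are hypothetical:

// some_math.h -- sketch of the two suggested fixes
#pragma once
#include <algorithm>

constexpr int kTwo{2};

// Fix 1: internal linkage. Each translation unit gets its own copy of the
// function, so the multiple definitions are distinct functions and the
// cross-TU requirements of [basic.def.odr]/6 do not apply.
static int doMathInternal(int arg) { return std::max(arg, kTwo); }

// Fix 2: move the constant into the function. Every definition of the
// inline function now refers to an entity defined within that definition,
// which [basic.def.odr]/6.2 explicitly allows, even though std::max
// odr-uses it via reference binding.
inline int doMathLocalConst(int arg) {
    constexpr int kTwoLocal{2};
    return std::max(arg, kTwoLocal);
}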

A variable that is used by an expression is considered to be odr-used unless there is a certain kind of simple proof that the reference to the variable can be replaced by the compile-time constant value of the variable without changing the result of the expression. If such a simple proof exists, then the standard requires the compiler to perform such a replacement; consequently, the variable is not odr-used (in particular, it does not require a definition, and the issue described by the OP would be avoided, because none of the translation units in which doMath is defined would actually reference a definition of kTwo). If the expression is too complicated, however, then all bets are off. The compiler might still replace the variable with its value, in which case the program may work as you expect; or the program may exhibit bugs or crash. That's the reality with IFNDR (ill-formed, no diagnostic required) programs.

The case where the variable is immediately passed by reference to a function, with the reference binding directly, is one common case where the variable is used in a way that is too complicated and the compiler is not required to determine whether or not it may be replaced by its compile-time constant value. This is because doing so would necessarily require inspecting the definition of the function (such as std::max<int> in this example).

You can "help" the compiler by writing int(kTwo) and using that as the argument to std::max as opposed to kTwo itself; this prevents an odr-use since the lvalue-to-rvalue conversion is now immediately applied prior to calling the function. I don't think this is a great solution (I recommend one of the two solutions that I previously mentioned) but it has its uses (GoogleTest uses this in order to avoid introducing odr-uses in statements like EXPECT_EQ(2, kTwo)).
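
A one-line sketch of the int(kTwo) workaround (again my illustration, not from the original answer): the temporary produced by int(kTwo) is what binds to std::max's reference parameter, so the lvalue-to-rvalue conversion on kTwo is applied up front and kTwo itself is not odr-used:

// kTwo is read only for its value here; the prvalue copy reaches std::max.
inline int doMath(int arg) { return std::max(arg, int(kTwo)); }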

If you want to know more about the precise definition of odr-use, involving the "potential results of an expression e...", that would be best addressed in a separate question.

Source https://stackoverflow.com/questions/69105602

QUESTION

How to work with classes extended from EnumClass and generated by build_runner in Dart/Flutter?

Asked 2021-Dec-17 at 12:03

For converting my GraphQL schema into Dart classes, I'm using the Ferry package, and I run this using build_runner.

In my database, I've defined the following enum type:

CREATE TYPE my_schm.currency AS ENUM ('CNY','EUR','PEN','USD');

Here is a translation of it (from schema.schema.gql.dart):

class GCurrency extends EnumClass {
  const GCurrency._(String name) : super(name);

  static const GCurrency CNY = _$gCurrencyCNY;

  static const GCurrency EUR = _$gCurrencyEUR;

  static const GCurrency PEN = _$gCurrencyPEN;

  static const GCurrency USD = _$gCurrencyUSD;

  static Serializer<GCurrency> get serializer => _$gCurrencySerializer;
  static BuiltSet<GCurrency> get values => _$gCurrencyValues;
  static GCurrency valueOf(String name) => _$gCurrencyValueOf(name);
}

This class, in turn, is used in the generated request-variables builder:

class GCreateQuoteRequestVarsBuilder
    implements
        Builder<GCreateQuoteRequestVars, GCreateQuoteRequestVarsBuilder> {
  _$GCreateQuoteRequestVars? _$v;

....
  _i2.GCurrency? _currency;
  _i2.GCurrency? get currency => _$this._currency;
  set currency(_i2.GCurrency? currency) => _$this._currency = currency;
....
}

I am trying to implement the following request method (some variables have been omitted for clarity):

GCreateQuoteRequestReq createQuoteRequest(List<Object> values) => GCreateQuoteRequestReq(
  (b) => b
    ..vars.vehicle = values[0] as String
    ..vars.body = values[1] as String
    ..vars.currency = values[5] as GCurrency
);

There is a problem with values[5], which is a String type, and I need to cast it to the right type, which should be GCurrency, but I'm getting this error:

The name 'GCurrency' isn't a type, so it can't be used in an 'as' expression.
Try changing the name to the name of an existing type, or creating a type with the name 'GCurrency'.

According to the documentation, I only need to import the following files for my tasks:

import '../loggedin.data.gql.dart';
import '../loggedin.req.gql.dart';
import '../loggedin.var.gql.dart';

ANSWER

Answered 2021-Dec-11 at 09:01

You should be able to use the class GCurrency. Can you try vars.currency = GCurrency.valueOf(values[5])?
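
A sketch of how the request method might look with that change (my adaptation of the OP's code; the import path is an assumption based on the generated file named earlier, so adjust it to your build output):

import 'schema.schema.gql.dart' show GCurrency;  // assumed path

GCreateQuoteRequestReq createQuoteRequest(List<Object> values) =>
    GCreateQuoteRequestReq(
      (b) => b
        ..vars.vehicle = values[0] as String
        ..vars.body = values[1] as String
        // values[5] holds the enum's name as a String, so convert it
        // with valueOf instead of casting with `as`:
        ..vars.currency = GCurrency.valueOf(values[5] as String),
    );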

Source https://stackoverflow.com/questions/70273672

QUESTION

Python best way to 'swap' words (multiple characters) in a string?

Asked 2021-Dec-15 at 08:30

Consider the following examples:

string_now = 'apple and avocado'
stringthen = string_now.swap('apple', 'avocado')  # stringthen = 'avocado and apple'

and:

string_now = 'fffffeeeeeddffee'
stringthen = string_now.swap('fffff', 'eeeee')  # stringthen = 'eeeeefffffddffee'

Approaches discussed in "Swap character of string in Python" do not work, as the mapping technique used there only considers one character at a time. Python's built-in str.maketrans() also only supports one-character translations; when I try multiple characters, it throws the following error:

[screenshot of the ValueError raised by str.maketrans()]

A chain of replace() methods is not only far from ideal (since I have many replacements to do, chaining replaces would be a big chunk of code) but, because of its sequential nature, it also does not translate correctly:

string_now = 'apple and avocado'
stringthen = string_now.replace('apple','avocado').replace('avocado','apple')

gives 'apple and apple' instead of 'avocado and apple'.

What's the best way to achieve this?

ANSWER

Answered 2021-Dec-03 at 04:41

I managed to make this function that does exactly what you want.

def swapwords(mystr, firstword, secondword):
    splitstr = mystr.split(" ")

    for i in range(len(splitstr)):
        if splitstr[i] == firstword:
            splitstr[i] = secondword
        elif splitstr[i] == secondword:
            splitstr[i] = firstword

    newstr = " ".join(splitstr)

    return newstr

Basically, what this does is take in your string "apple and avocado" and split it by spaces, so each word gets its own index in a list, splitstr. Using this, we can use a for loop to swap the words. The elif ensures a word that has just been swapped isn't immediately swapped back. Lastly, I join the string back together using newstr = " ".join(splitstr), which joins the words separated by a space.

Running swapwords('apple and avocado', 'apple', 'avocado') gives us: avocado and apple.
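
As a follow-up (my addition, not part of the original answer): the split-based approach only works when the words are separated by spaces, so it won't handle the 'fffffeeeeeddffee' example from the question. A single-pass regex substitution is one alternative sketch that covers both cases:

import re

def swap(s, a, b):
    # Swap all occurrences of a and b in s in a single pass.
    mapping = {a: b, b: a}
    # Longer alternatives first, in case one word is a prefix of the other.
    pattern = re.compile("|".join(
        re.escape(k) for k in sorted(mapping, key=len, reverse=True)))
    # Each match is replaced via the mapping, so a replacement can never be
    # re-matched (avoiding the 'apple and apple' problem).
    return pattern.sub(lambda m: mapping[m.group(0)], s)

print(swap('apple and avocado', 'apple', 'avocado'))  # avocado and apple
print(swap('fffffeeeeeddffee', 'fffff', 'eeeee'))     # eeeeefffffddffee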

Source https://stackoverflow.com/questions/70209111

QUESTION

ValueError: None values not supported. Code working properly on CPU/GPU but not on TPU

Asked 2021-Nov-09 at 12:35

I am trying to train a seq2seq model for language translation, and I am copy-pasting code from this Kaggle Notebook on Google Colab. The code works fine on CPU and GPU, but it gives me errors while training on a TPU. This same question has already been asked here.

Here is my code:

strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
  model = create_model()
  model.compile(optimizer = 'rmsprop', loss = 'categorical_crossentropy')

model.fit_generator(generator = generate_batch(X_train, y_train, batch_size = batch_size),
                    steps_per_epoch = train_samples // batch_size,
                    epochs = epochs,
                    validation_data = generate_batch(X_test, y_test, batch_size = batch_size),
                    validation_steps = val_samples // batch_size)

Traceback:

Epoch 1/2
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-60-940fe0ee3c8b> in <module>()
      3                     epochs = epochs,
      4                     validation_data = generate_batch(X_test, y_test, batch_size = batch_size),
----> 5                     validation_steps = val_samples // batch_size)

10 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    992           except Exception as e:  # pylint:disable=broad-except
    993             if hasattr(e, "ag_error_metadata"):
--> 994               raise e.ag_error_metadata.to_exception(e)
    995             else:
    996               raise

ValueError: in user code:
    /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:853 train_function  *
    return step_function(self, iterator)
    /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:842 step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
...
ValueError: None values not supported.

I couldn't figure out the error, but I think it is caused by this generate_batch function:

X, y = lines['english_sentence'], lines['hindi_sentence']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 34)

def generate_batch(X = X_train, y = y_train, batch_size = 128):
    while True:
        for j in range(0, len(X), batch_size):

            encoder_input_data = np.zeros((batch_size, max_length_src), dtype='float32')
            decoder_input_data = np.zeros((batch_size, max_length_tar), dtype='float32')
            decoder_target_data = np.zeros((batch_size, max_length_tar, num_decoder_tokens), dtype='float32')

            for i, (input_text, target_text) in enumerate(zip(X[j:j + batch_size], y[j:j + batch_size])):
                for t, word in enumerate(input_text.split()):
                    encoder_input_data[i, t] = input_token_index[word]
                for t, word in enumerate(target_text.split()):
                    if t < len(target_text.split()) - 1:
                        decoder_input_data[i, t] = target_token_index[word]
                    if t > 0:
                        decoder_target_data[i, t - 1, target_token_index[word]] = 1.
            yield([encoder_input_data, decoder_input_data], decoder_target_data)

My Colab notebook - here
Kaggle dataset - here
TensorFlow version - 2.6

Edit - Please don't tell me to downgrade TensorFlow/Keras to 1.x. I can downgrade to TensorFlow 2.0, 2.1, or 2.3, but not 1.x. I don't understand TensorFlow 1.x, and there is no point in using a 3-year-old version.

ANSWER

Answered 2021-Nov-09 at 06:27

Try downgrading to Keras 1.0.2. If that works, great; otherwise I will suggest another solution.
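
For what it's worth, a common workaround for Python-generator input on TPUs (my suggestion, untested against this notebook) is to wrap the generator in tf.data.Dataset.from_generator with an explicit output_signature, so the distribution strategy sees fully specified tensor shapes; the generator would also need to yield a tuple ((encoder_input, decoder_input), target) rather than a list, to match the signature:

import tensorflow as tf

# Sketch only: shapes and dtypes follow the np.zeros(...) buffers in generate_batch.
dataset = tf.data.Dataset.from_generator(
    lambda: generate_batch(X_train, y_train, batch_size=batch_size),
    output_signature=(
        (
            tf.TensorSpec(shape=(batch_size, max_length_src), dtype=tf.float32),
            tf.TensorSpec(shape=(batch_size, max_length_tar), dtype=tf.float32),
        ),
        tf.TensorSpec(shape=(batch_size, max_length_tar, num_decoder_tokens),
                      dtype=tf.float32),
    ),
)

# model.fit accepts a tf.data.Dataset directly (fit_generator is deprecated).
model.fit(dataset,
          steps_per_epoch=train_samples // batch_size,
          epochs=epochs)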

Source https://stackoverflow.com/questions/69752055

Community Discussions contain sources that include Stack Exchange Network

Tutorials and Learning Resources in Translation

Tutorials and Learning Resources are not available at this moment for Translation
