Popular Releases
axios v0.26.1
RxJava 3.1.4
jadx 1.3.5
fetch
RxSwift Atlas
Popular Libraries
by axios (javascript, 92140, MIT): Promise based HTTP client for the browser and node.js
by ReactiveX (java, 45971, Apache-2.0): RxJava – Reactive Extensions for the JVM – a library for composing asynchronous and event-based programs using observable sequences for the Java VM.
by skylot (java, 29830, Apache-2.0): Dex to Java decompiler
by caolan (javascript, 27509, MIT): Async utilities for node and the browser
by ReactiveX (typescript, 26586, Apache-2.0): A reactive programming library for JavaScript
by github (javascript, 25051, MIT): A window.fetch JavaScript polyfill.
by ReactiveX (swift, 22027, NOASSERTION): Reactive Programming in Swift
by ramda (javascript, 21915, MIT): Practical functional Javascript
by rollup (javascript, 21476, NOASSERTION): Next-generation ES module bundler
New Libraries
by MudBlazor (csharp, 2931, MIT): Blazor Component Library based on Material design. The goal is to do more with Blazor, utilizing CSS and keeping Javascript to a bare minimum.
by gvergnaud (typescript, 2569, MIT): The exhaustive Pattern Matching library for TypeScript, with smart type inference.
by piscinajs (typescript, 2414, NOASSERTION): A fast, efficient Node.js Worker Thread Pool implementation
by ChrisTitusTech (powershell, 2410, MIT): The Ultimate Windows 10 Script, created from multiple debloat scripts and gists from GitHub.
by lunatic-solutions (rust, 2249, NOASSERTION): Lunatic is an Erlang-inspired runtime for WebAssembly
by smol-rs (rust, 2074, NOASSERTION): A "hand-written Spring" column: a simplified Spring framework written from scratch to teach the core principles of the Spring source code. The hand-written version strips the source down to the core logic of the overall framework while keeping the core features, such as IoC, AOP, the bean lifecycle, context, scopes, and resource handling.
by iswbm (python, 1913): Python Black Magic Handbook (Python 黑魔法手册)
by bobbyiliev (html, 1854, MIT): Free Introduction to Bash Scripting eBook
QUESTION
Use for loop or multiple prints?
Asked 2022-Mar-01 at 21:31

What programming style should I use?
print(1)
print(2)
or
for i in range(1, 3):
    print(i)
The output is the same (1, then 2 on the next line), but which version should I use as a Python programmer? Is the first version redundant?
ANSWER
Answered 2022-Mar-01 at 21:31

It depends.
There is an old rule: "three or more, use for". (source)
On the other hand, sometimes unrolling a loop can offer a speed-up. (But that's generally more true in C or assembly.)
You should do what makes your program more clear.
For example, in the code below I wrote out the calculations for the ABD matrix of a fiber reinforced composite laminate, because nested loops would make it more complex in this case:
for la, z2, z3 in zip(layers, lz2, lz3):
    # first row
    ABD[0][0] += la.Q̅11 * la.thickness  # Hyer:1998, p. 290
    ABD[0][1] += la.Q̅12 * la.thickness
    ABD[0][2] += la.Q̅16 * la.thickness
    ABD[0][3] += la.Q̅11 * z2
    ABD[0][4] += la.Q̅12 * z2
    ABD[0][5] += la.Q̅16 * z2
    # second row
    ABD[1][0] += la.Q̅12 * la.thickness
    ABD[1][1] += la.Q̅22 * la.thickness
    ABD[1][2] += la.Q̅26 * la.thickness
    ABD[1][3] += la.Q̅12 * z2
    ABD[1][4] += la.Q̅22 * z2
    ABD[1][5] += la.Q̅26 * z2
    # third row
    ABD[2][0] += la.Q̅16 * la.thickness
    ABD[2][1] += la.Q̅26 * la.thickness
    ABD[2][2] += la.Q̅66 * la.thickness
    ABD[2][3] += la.Q̅16 * z2
    ABD[2][4] += la.Q̅26 * z2
    ABD[2][5] += la.Q̅66 * z2
    # fourth row
    ABD[3][0] += la.Q̅11 * z2
    ABD[3][1] += la.Q̅12 * z2
    ABD[3][2] += la.Q̅16 * z2
    ABD[3][3] += la.Q̅11 * z3
    ABD[3][4] += la.Q̅12 * z3
    ABD[3][5] += la.Q̅16 * z3
    # fifth row
    ABD[4][0] += la.Q̅12 * z2
    ABD[4][1] += la.Q̅22 * z2
    ABD[4][2] += la.Q̅26 * z2
    ABD[4][3] += la.Q̅12 * z3
    ABD[4][4] += la.Q̅22 * z3
    ABD[4][5] += la.Q̅26 * z3
    # sixth row
    ABD[5][0] += la.Q̅16 * z2
    ABD[5][1] += la.Q̅26 * z2
    ABD[5][2] += la.Q̅66 * z2
    ABD[5][3] += la.Q̅16 * z3
    ABD[5][4] += la.Q̅26 * z3
    ABD[5][5] += la.Q̅66 * z3
    # Calculate unit thermal stress resultants.
    # Hyer:1998, p. 445
    Ntx += (la.Q̅11 * la.αx + la.Q̅12 * la.αy + la.Q̅16 * la.αxy) * la.thickness
    Nty += (la.Q̅12 * la.αx + la.Q̅22 * la.αy + la.Q̅26 * la.αxy) * la.thickness
    Ntxy += (la.Q̅16 * la.αx + la.Q̅26 * la.αy + la.Q̅66 * la.αxy) * la.thickness
    # Calculate H matrix (derived from Barbero:2018, p. 181)
    sb = 5 / 4 * (la.thickness - 4 * z3 / thickness ** 2)
    H[0][0] += la.Q̅s44 * sb
    H[0][1] += la.Q̅s45 * sb
    H[1][0] += la.Q̅s45 * sb
    H[1][1] += la.Q̅s55 * sb
    # Calculate E3
    c3 += la.thickness / la.E3
QUESTION
Why doesn't the rangeCheck method in the java.util.ArrayList class check for negative index?
Asked 2022-Feb-28 at 15:32

/**
 * Checks if the given index is in range. If not, throws an appropriate
 * runtime exception. This method does *not* check if the index is
 * negative: It is always used immediately prior to an array access,
 * which throws an ArrayIndexOutOfBoundsException if index is negative.
 */
private void rangeCheck(int index) {
    if (index >= size)
        throw new IndexOutOfBoundsException(outOfBoundsMsg(index));
}
From: jdk/ArrayList.java at jdk8-b120 · openjdk/jdk · GitHub
If we write the following code, both indexes are out of bounds, but the exception types are different.
import java.util.ArrayList;
import java.util.List;

public class Test {

    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("");

        try {
            list.get(-1);
        } catch (Exception e) {
            e.printStackTrace();
        }

        try {
            list.get(1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

}
The output is as follows:
java.lang.ArrayIndexOutOfBoundsException: -1
    at java.util.ArrayList.elementData(ArrayList.java:424)
    at java.util.ArrayList.get(ArrayList.java:437)
    at Test.main(Test.java:11)
java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
    at java.util.ArrayList.rangeCheck(ArrayList.java:659)
    at java.util.ArrayList.get(ArrayList.java:435)
    at Test.main(Test.java:17)
Related questions:
Why does the rangeCheckForAdd method in java.util.ArrayList check for negative index?
Why doesn't java.util.Arrays.ArrayList do index out-of-bounds checking?

What confuses me is why their implementations are inconsistent. Were these methods written by different people with their own programming styles? In other words, if out-of-bounds exceptions will eventually fire anyway, there is no need to check.
ANSWER
Answered 2022-Feb-28 at 14:23

It's a micro-optimization. For code clarity you might prefer the same exception for both, but when you're in a hot loop you'll want to avoid an unnecessary operation. ArrayList being an old class, the effect this has may have varied between JDK versions. If someone is interested enough, they could benchmark it with 1.8 and newer JDKs to see how much of an optimization it is for get().
Since accessing a negative array index will fail anyway, there is no need to check for it. However, the size of the ArrayList is not always the same as the size of its internal array, so the upper bound needs to be checked explicitly.
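The size-versus-capacity point can be made concrete with a small sketch of my own (not part of the original answer): an ArrayList built with capacity 10 but holding a single element still rejects get(5), because rangeCheck compares against size, not against the backing array's length.

```java
import java.util.ArrayList;
import java.util.List;

public class CapacityVsSize {
    public static void main(String[] args) {
        // Backing array has room for 10 elements, but size is only 1.
        List<String> list = new ArrayList<>(10);
        list.add("a");

        try {
            // Index 5 is within the backing array's capacity, so a raw
            // array access alone would not fail -- only the explicit
            // comparison against size can catch this.
            list.get(5);
        } catch (IndexOutOfBoundsException e) {
            System.out.println(e.getClass().getSimpleName());
        }
    }
}
```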
As to why rangeCheckForAdd does check for negative indexes: good question. Adding is slow anyway, so the micro-optimization wouldn't make much of a difference there. Maybe they wanted consistent error messaging.
QUESTION
Are Java streams able to lazily reduce from map/filter conditions?
Asked 2022-Jan-12 at 09:30

I am using a functional programming style to solve the Leetcode easy question, Count the Number of Consistent Strings. The premise of this question is simple: count the number of values for which the predicate "all values are in another set" holds.
I have two approaches, one which I am fairly certain behaves as I want it to, and the other which I am less sure about. Both produce the correct output, but ideally they would stop evaluating other elements after the output is in a final state.
public int countConsistentStrings(String allowed, String[] words) {
    final Set<Character> set = allowed.chars()
            .mapToObj(c -> (char) c)
            .collect(Collectors.toCollection(HashSet::new));
    return (int) Arrays.stream(words)
            .filter(word ->
                    word.chars()
                            .allMatch(c -> set.contains((char) c))
            )
            .count();
}
In this solution, to the best of my knowledge, the allMatch statement will terminate and evaluate to false at the first instance of c for which the predicate does not hold true, skipping the other values in that stream.
public int countConsistentStrings(String allowed, String[] words) {
    Set<Character> set = allowed.chars()
            .mapToObj(c -> (char) c)
            .collect(Collectors.toCollection(HashSet::new));
    return (int) Arrays.stream(words)
            .filter(word ->
                    word.chars()
                            .mapToObj(c -> set.contains((char) c))
                            .reduce((a, b) -> a && b)
                            .orElse(false)
            )
            .count();
}
In this solution the same logic is used, but instead of allMatch I use map and then reduce. Logically, after a single false value comes from the map stage, reduce will always evaluate to false. I know Java streams are lazy, but I am unsure when they "know" just how lazy they can be. Will this be less efficient than using allMatch, or will laziness ensure the same operation?
Lastly, in this code, we can see that the value of x will always be 0: after filtering for only positive numbers, their sum will always be positive (assuming no overflow), so taking the minimum of a positive number and a hardcoded 0 will be 0. Will the stream be lazy enough to always evaluate this to 0, or will it reduce every element after the filter anyway?
List<Integer> list = new ArrayList<>();
...
/* Some values added to list */
...
int x = list.stream()
        .filter(i -> i >= 0)
        .reduce((a, b) -> Math.min(a + b, 0))
        .orElse(0);
To summarize the above, how does one know when the Java stream will be lazy? There are lazy opportunities that I see in the code, but how can I guarantee that my code will be as lazy as possible?
ANSWER
Answered 2022-Jan-12 at 09:30

The actual term you're asking about is short-circuiting:
Further, some operations are deemed short-circuiting operations. An intermediate operation is short-circuiting if, when presented with infinite input, it may produce a finite stream as a result. A terminal operation is short-circuiting if, when presented with infinite input, it may terminate in finite time. Having a short-circuiting operation in the pipeline is a necessary, but not sufficient, condition for the processing of an infinite stream to terminate normally in finite time.
The term “lazy” only applies to intermediate operations and means that they only perform work when being requested by the terminal operation. This is always the case, so when you don’t chain a terminal operation, no intermediate operation will ever process any element.
Finding out whether a terminal operation is short-circuiting is rather easy: go to the Stream API documentation and check whether the particular terminal operation's documentation contains the sentence "This is a short-circuiting terminal operation."
allMatch has it; reduce does not.
This does not mean that such optimizations based on logic or algebra are impossible, but the responsibility lies with the JVM's optimizer, which might do the same for loops. However, this requires inlining of all involved methods to be sure that this condition always applies and that there are no side effects which must be retained. This behavioral compatibility implies that even if the processing gets optimized away, a peek(System.out::println) would keep printing all elements as if they were processed. In practice, you should not expect such optimizations, as the Stream implementation code is too complex for the optimizer.
QUESTION
Are any{}, all{}, and none{} lazy operations in Kotlin?
Asked 2022-Jan-12 at 01:03

I am using a functional programming style to solve the Leetcode easy question, Count the Number of Consistent Strings. The premise of this question is simple: count the number of values for which the predicate "all values are in another set" holds.
I was able to do this pretty concisely like so:
class Solution {
    fun countConsistentStrings(allowed: String, words: Array<String>): Int {
        val permitted = allowed.toSet()
        return words.count { it.all { it in permitted } }
    }
}
I know that Java streams are lazy, but I have read that Kotlin collection operations are only lazy when asSequence is used, and are otherwise eager.
For reductions to a boolean based on a predicate using any, none, or all, it makes the most sense to me that this should be done lazily (e.g. a single false in all should evaluate the whole expression to false and stop evaluating the predicate for other elements).
Are these operations implemented this way, or are they done eagerly like other operations in Kotlin? If the latter, is there a way to do them lazily?
ANSWER
Answered 2022-Jan-12 at 00:03

The docs don't explicitly say, but this is easy enough to test.
class A : Iterable<String>, Iterator<String> {
    public override fun iterator(): Iterator<String> {
        return this
    }
    public override fun hasNext(): Boolean {
        return true
    }
    public override fun next(): String {
        return "test"
    }
}

fun main(args: Array<String>) {
    val a = A()
    println(a.any { x -> x == "test" })
    println(a.none { x -> x == "test" })
    println(a.all { x -> x != "test" })
}
Here, A is a silly iterable class that just produces "test" forever and never runs out. Then we use any, none, and all to check whether it produces "test" or not. It's an infinite iterable, so if any of these three functions tried to exhaust it, the program would hang forever. But you can run this yourself, and you'll see one true and two falses. The program terminates, so each of those three functions stopped when it found, respectively, a match, a non-match, and a non-match.
QUESTION
Use map and zip to be more func style in 2 for loops
Asked 2021-Oct-19 at 03:58

I implemented the following code to calculate a weighted average with for loops. How can I write it in a more functional programming style, using map and zip?
val aggAvg = (emb: Seq[Seq[Float]], weights: Seq[Float]) => {
  val embSize = emb.head.size
  val len = emb.size
  (0 until embSize)
    .map { i =>
      (0 until len).map { j =>
        emb(j)(i) * weights(j)
      }.sum / weights.sum
    }
}
Example: Given
val emb: Seq[Seq[Float]] = Seq(Seq(1,2,3), Seq(4,5,6))
val weights: Seq[Float] = Seq(2, 8)
the output would be Seq(3.4, 4.4, 5.4), because (1 * 2 + 4 * 8) / (2 + 8) = 3.4, and so on.
ANSWER
Answered 2021-Oct-19 at 00:00

Here is one way, although I'm not sure if it's the most elegant:
val aggAvg = (emb: Seq[Seq[Float]], weights: Seq[Float]) =>
  emb.transpose.map((weights, _).zipped.map(_ * _).sum).map(_ / weights.sum)

res0: Seq[Float] = List(3.4, 4.4, 5.4)
QUESTION
malloc a "member" of a struct vs. the whole struct, when the struct is quite simple
Asked 2021-Sep-23 at 16:33

I have searched this site for topics about malloc on structs. However, I have a slightly different problem: is malloc on a member of a struct different from malloc on the whole struct, especially when that struct is quite simple, i.e. has only one member, which is exactly what we want to allocate? To be clear, see the code corresponding to the student and student2 structs below.
struct student {
    int* majorScore;
};

struct student2 {
    int majorScore[3];
};


int main()
{
    struct student john;
    john.majorScore = (int*) malloc(sizeof(int) * 3);
    john.majorScore[0] = 50;
    john.majorScore[1] = 27;
    john.majorScore[2] = 56;

    struct student2* amy = (struct student2*) malloc(sizeof(struct student2));
    amy->majorScore[0] = 50;
    amy->majorScore[1] = 27;
    amy->majorScore[2] = 56;


    return 0;
}
Are they different at the memory level? If yes, what is the difference? If not, which is better in terms of good programming style?
ANSWER
Answered 2021-Sep-23 at 16:15

First, you dynamically allocate one struct, but not the other. So you're comparing apples to oranges.
Statically-allocated structs:
struct student john;
john.majorScore = malloc(sizeof(int) * 3);
john.majorScore[0] = 50;
john.majorScore[1] = 27;
john.majorScore[2] = 56;

struct student2 amy;
amy.majorScore[0] = 50;
amy.majorScore[1] = 27;
amy.majorScore[2] = 56;
struct student john
+------------+----------+      +----------+
| majorScore |  -------------> |    50    |
+------------+----------+      +----------+
| [padding]  |          |      |    27    |
+------------+----------+      +----------+
                               |    56    |
                               +----------+

struct student2 amy
+------------+----------+
| majorScore |    50    |
|            +----------+
|            |    27    |
|            +----------+
|            |    56    |
+------------+----------+
| [padding]  |          |
+------------+----------+
struct student uses more memory because it has an extra value (the pointer), and it has the overhead of two memory blocks instead of one.
struct student2 always has memory for exactly three scores, even if you need fewer, and it can't possibly accommodate more than 3.
Dynamically-allocated structs:
struct student *john = malloc(sizeof(struct student));
john->majorScore = malloc(sizeof(int) * 3);
john->majorScore[0] = 50;
john->majorScore[1] = 27;
john->majorScore[2] = 56;

struct student2 *amy = malloc(sizeof(struct student2));
amy->majorScore[0] = 50;
amy->majorScore[1] = 27;
amy->majorScore[2] = 56;
struct student *john
+----------+      +------------+----------+      +----------+
|  ----------->   | majorScore |  -------------> |    50    |
+----------+      +------------+----------+      +----------+
                  | [padding]  |          |      |    27    |
                  +------------+----------+      +----------+
                                                 |    56    |
                                                 +----------+

struct student2 *amy
+----------+      +------------+----------+
|  ----------->   | majorScore |    50    |
+----------+      |            +----------+
                  |            |    27    |
                  |            +----------+
                  |            |    56    |
                  +------------+----------+
                  | [padding]  |          |
                  +------------+----------+
Same analysis as above.
QUESTION
Difference between Running time and Execution time in algorithm?
Asked 2021-Aug-08 at 08:01

I'm currently reading CLRS (section 2.2, page 25), in which the author describes the running time of an algorithm as:
The running time of an algorithm on a particular input is the number of primitive operations or “steps” executed.
The author also uses the running time to analyze algorithms. Then I referred to a book called Data Structures and Algorithms Made Easy by Narasimha Karumanchi, in which he describes the following:
1.7 Goal of the Analysis of Algorithms The goal of the analysis of algorithms is to compare algorithms (or solutions) mainly in terms of running time but also in terms of other factors (e.g., memory, developer effort, etc.)
1.9 How to Compare Algorithms: To compare algorithms, let us define a few objective measures:
Execution times? Not a good measure as execution times are specific to a particular computer.
Number of statements executed? Not a good measure, since the number of statements varies with the programming language as well as the style of the individual programmer.
Ideal solution? Let us assume that we express the running time of a given algorithm as a function of the input size n (i.e., f(n)) and compare these different functions corresponding to running times. This kind of comparison is independent of machine time, programming style, etc.
As you can see, the CLRS author describes the running time as the number of steps executed, whereas the author of the second book says the number of steps executed is not a good measure for analyzing algorithms. Also, the running time depends on the computer (my assumption), but the author of the second book says that we cannot use the execution time to analyze algorithms, as it depends entirely on the computer.
I thought the execution time and the running time were the same! So:
What is the real meaning or definition of running time and execution time? Are they the same or different?
Does running time describe the number of steps executed or not?
Does running time depend on the computer or not?
Thanks in advance.
ANSWER
Answered 2021-Aug-08 at 07:57

What is the real meaning or definition of running time and execution time? Are they the same or different?
The definition of "running time" in 'Introduction to Algorithms' by C,L,R,S [CLRS] is actually not a time, but a number of steps. This is not what you would intuitively use as a definition. Most would agree that "running" and "executing" are the same concept, and that "time" is expressed in a unit of time (like milliseconds). So while we would normally consider these two terms to have the same meaning, in CLRS they have deviated from that and given a different meaning to "running time".
Does running time describe the number of steps executed or not?
It does mean that in CLRS. But the definition that CLRS uses for "running time" is particular, and not the same as you might encounter in other resources.
CLRS assumes here that a primitive operation (i.e. a step) takes O(1) time.
This is typically true for CPU instructions, which take up to a fixed maximum number of cycles (where each cycle represents a unit of time), but it may not be true in higher-level languages. For instance, some languages have a sort instruction; counting that as a single "step" would give useless results in an analysis.
Breaking down an algorithm into its O(1) steps does help to analyse the complexity of an algorithm. Counting the steps for different inputs may only give a hint about the complexity though. Ultimately, the complexity of an algorithm requires a (mathematical) proof, based on the loops and the known complexity of the steps used in an algorithm.
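As an illustration of counting steps rather than seconds (my own sketch, not from the answer), one can instrument an algorithm to count a chosen primitive operation. The count is the same on any machine, while the wall-clock time is not:

```java
public class StepCounting {
    static int steps;  // counts one chosen primitive operation: the comparison

    static int linearSearch(int[] a, int key) {
        for (int i = 0; i < a.length; i++) {
            steps++;                     // one comparison = one "step"
            if (a[i] == key) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        // Worst case (key absent): the step count equals n on any machine,
        // which is what "running time" in the CLRS sense measures.
        int[] a = new int[1000];
        steps = 0;
        linearSearch(a, 42);
        System.out.println(steps);  // -> 1000
    }
}
```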
Does running time depend on the computer or not?
Certainly the execution time may differ. This is one of the reasons we want to buy a new computer once in a while.
The number of steps may depend on the computer. If both support the same programming language, and you count steps in that language, then: yes. But if you did the counting more thoroughly and counted the CPU instructions that are actually executed by the compiled program, then it might be different. For instance, a C compiler on one computer may generate different machine code than a different C compiler on another computer, so the number of CPU instructions may be fewer on one than on the other, even though they result from the same C program code.
Practically however, this counting at CPU instruction level is not relevant for determining the complexity of an algorithm. We generally know the time complexity of each instruction in the higher level language, and that is what counts for determining the overall complexity of an algorithm.
QUESTION
Lifetime of get method in postgres Rust
Asked 2021-Jun-14 at 07:09

Some Background (feel free to skip):
I'm very new to Rust, I come from a Haskell background (just in case that gives you an idea of any misconceptions I might have).
I am trying to write a program which, given a bunch of inputs from a database, can create customisable reports. To do this I wanted to create a Field datatype which is composable in a sort of DSL style. In Haskell my intuition would be to make Field an instance of Functor and Applicative so that writing things like this would be possible:
type Env = [String]
type Row = [String]

data Field a = Field
  { fieldParse :: Env -> Row -> a }

instance Functor Field where
  fmap f a = Field $
    \env row -> f $ fieldParse a env row

instance Applicative Field where
  pure = Field . const . const
  fa <*> fb = Field $
    \env row -> (fieldParse fa) env row
                  $ (fieldParse fb) env row

oneField :: Field Int
oneField = pure 1

twoField :: Field Int
twoField = fmap (*2) oneField

tripleField :: Field (Int -> Int)
tripleField = pure (*3)

threeField :: Field Int
threeField = tripleField <*> oneField
Actual Question:
I know that it's quite awkward to implement Functor and Applicative traits in Rust, so I just implemented the appropriate functions for Field rather than actually defining traits (this all compiled fine). Here's a very simplified implementation of Field in Rust, without any of the Functor or Applicative stuff.
use std::result;
use postgres::Row;
use postgres::types::FromSql;

type Env = Vec<String>;

type FieldFunction<A> = Box<dyn Fn(&Env, &Row) -> Result<A, String>>;

struct Field<A> {
    field_parse: FieldFunction<A>
}
I can easily create a function which simply gets the value from an input field and creates a report Field with it:
fn field_good(input: u32) -> Field<String> {
    let f = Box::new(move |_: &Env, row: &Row| {
        Ok(row.get(input as usize))
    });

    Field { field_parse: f }
}
But when I try to make this polymorphic rather than using String, I get some really strange lifetime errors that I just don't understand:
fn field_bad<'a, A: FromSql<'a>>(input: u32) -> Field<A> {
    let f = Box::new(move |_: &Env, row: &Row| {
        Ok(row.get(input as usize))
    });

    Field { field_parse: f }
}
error[E0495]: cannot infer an appropriate lifetime for autoref due to conflicting requirements
  --> src/test.rs:36:16
   |
36 |         Ok(row.get(input as usize))
   |                ^^^
   |
note: first, the lifetime cannot outlive the anonymous lifetime #2 defined on the body at 35:22...
  --> src/test.rs:35:22
   |
35 |       let f = Box::new(move |_: &Env, row: &Row| {
   |  ______________________^
36 | |         Ok(row.get(input as usize))
37 | |     });
   | |_____^
note: ...so that reference does not outlive borrowed content
  --> src/test.rs:36:12
   |
36 |         Ok(row.get(input as usize))
   |            ^^^
note: but, the lifetime must be valid for the lifetime `'a` as defined on the function body at 34:14...
  --> src/test.rs:34:14
   |
34 | fn field_bad<'a, A: FromSql<'a>>(input: FieldId) -> Field<A> {
   |              ^^
note: ...so that the types are compatible
  --> src/test.rs:36:16
   |
36 |         Ok(row.get(input as usize))
   |                ^^^
   = note: expected `FromSql<'_>`
              found `FromSql<'a>`
Any help explaining what this error is actually getting at or how to potentially fix it would be much appreciated. I included the Haskell stuff so that my design intentions are clear, that way if the problem is that I'm using a programming style that doesn't really work in Rust, then that could be pointed out to me.
EDIT:
Forgot to include a link to the docs for postgres::Row::get in case it's relevant. They can be found here.
ANSWER
Answered 2021-Jun-10 at 12:54
So I seem to have fixed it, although I'm still not sure I understand exactly what I've done...
type FieldFunction<'a, A> = Box<dyn Fn(&Env, &'a Row) -> Result<A, String>>;

struct Field<'a, A> {
    field_parse: FieldFunction<'a, A>
}

fn field_bad<'a, A: FromSql<'a>>(input: u32) -> Field<'a, A> {
    let f = Box::new(move |_: &Env, row: &'a Row| {
        Ok(row.get(input as usize))
    });

    Field { field_parse: f }
}
I also swear I tried this several times before but there we are...
QUESTION
Is there a way to implement mapcar in Common Lisp using only applicative programming and avoiding recursion or iteration as programming styles?
Asked 2021-May-25 at 10:22
I am trying to learn Common Lisp with the book Common Lisp: A Gentle Introduction to Symbolic Computation. In addition, I am using SBCL, Emacs and Slime.
In chapter 7, the author suggests there are three styles of programming the book will cover: recursion, iteration and applicative programming.
I am interested in the last one. This style is famous for the applicative operator funcall, which is the primitive responsible for other applicative operators such as mapcar.
Thus, with an educational purpose, I decided to implement my own version of mapcar using funcall:
(defun my-mapcar (fn xs)
  (if (null xs)
      nil
      (cons (funcall fn (car xs))
            (my-mapcar fn (cdr xs)))))
As you might see, I used recursion as a programming style to build an iconic applicative programming function.
It seems to work:
CL-USER> (my-mapcar (lambda (n) (+ n 1)) (list 1 2 3 4))
(2 3 4 5)

CL-USER> (my-mapcar (lambda (n) (+ n 1)) (list ))
NIL

;; comparing the results with the official one

CL-USER> (mapcar (lambda (n) (+ n 1)) (list ))
NIL

CL-USER> (mapcar (lambda (n) (+ n 1)) (list 1 2 3 4))
(2 3 4 5)
Is there a way to implement mapcar without using recursion or iteration? Using only applicative programming as a style?
Thanks.
Note: I tried to see how it was implemented, but it was not possible:
CL-USER> (function-lambda-expression #'mapcar)
NIL
T
MAPCAR
I also used Emacs M-. to look for the documentation. However, the points below did not help me. I used this to find the files below:
/usr/share/sbcl-source/src/code/list.lisp
  (DEFUN MAPCAR)
/usr/share/sbcl-source/src/compiler/seqtran.lisp
  (:DEFINE-SOURCE-TRANSFORM MAPCAR)
/usr/share/sbcl-source/src/compiler/fndb.lisp
  (DECLAIM MAPCAR SB-C:DEFKNOWN)
ANSWER
Answered 2021-May-21 at 17:36
mapcar is by itself a primitive applicative operator (p. 220 of Common Lisp: A Gentle Introduction to Symbolic Computation). So, if you want to rewrite it in an applicative way, you should use some other primitive applicative operator, for instance map or map-into. For instance, with map-into:
CL-USER> (defun my-mapcar (fn list &rest lists)
           (apply #'map-into (make-list (length list)) fn list lists))
MY-MAPCAR
CL-USER> (my-mapcar #'1+ '(1 2 3))
(2 3 4)
CL-USER> (my-mapcar #'+ '(1 2 3) '(10 20 30) '(100 200 300))
(111 222 333)
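For comparison only (this is Python, not Common Lisp, and is an illustration rather than part of the answer): the same idea of delegating to a primitive mapping operator, so that no explicit recursion or iteration appears in your own definition, looks like this.

```python
def my_mapcar(fn, lst, *more):
    # Delegate to the built-in map, which walks the lists for us:
    # no explicit recursion or loop appears in this definition.
    return list(map(fn, lst, *more))

print(my_mapcar(lambda n: n + 1, [1, 2, 3]))  # [2, 3, 4]
print(my_mapcar(lambda a, b, c: a + b + c,
                [1, 2, 3], [10, 20, 30], [100, 200, 300]))  # [111, 222, 333]
```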
QUESTION
Create TKinter label using class method
Asked 2021-May-07 at 09:22
I am trying to use an object-oriented programming style to write the code for a Tkinter app. I want to use a class method to place labels (or other widgets) on the GUI. The code I wrote is adding a character which I don't expect to the GUI. How can I write the initial add_label method so that it does not add the unwanted character? Below is my code and a screenshot. I am new to OOP, so I might be missing something.
from tkinter import *

class App:
    def __init__(self, parent):
        self.widgets(root)
        self.add_label(root)

    def widgets(self, app):
        self.title = Label(app, text='LABEL UP').pack()
        self.btn = Button(app, text='BUTTON').pack()

    def add_label(self, text):
        Label(text=text).pack()

root = Tk()
App(root)
App.add_label(root, 'LABEL_1')
App.add_label(root, 'LABEL_2')
root.mainloop()
I am new to OOP and still trying to figure out how I can benefit from code reuse in this case. My app has several widgets and functions.
ANSWER
Answered 2021-May-07 at 09:22
What do you expect self.add_label(root) to do? According to your method definition, it takes text as an argument, so when you say self.add_label(root), you are passing root as text. And what is root? It is '.', so remove it and it'll be gone.
Though a proper way to do this would be to pass a parent argument to the method and use that during widget creation.
And the important part is, you're instantiating the class wrong. Keep a reference to it, rather than creating a lot of instances.
from tkinter import *

class App:
    def __init__(self, parent):
        self.widgets(root)

    def widgets(self, app):
        self.title = Label(app, text='LABEL UP').pack()
        self.btn = Button(app, text='BUTTON').pack()

    def add_label(self, parent, text):
        Label(parent, text=text).pack()

root = Tk()

app = App(root)
app.add_label(root, 'LABEL_1')
app.add_label(root, 'LABEL_2')

root.mainloop()
Try not to get confused between the two mistakes.
How would I write this class? I don't know the true purpose of this, but I think you can follow something like this:
from tkinter import *

class App:
    def __init__(self, parent):
        self.parent = parent

        self.title = Label(self.parent, text='LABEL UP')
        self.title.pack()

        self.entry = Entry(self.parent)
        self.entry.pack()

        self.btn = Button(self.parent, text='BUTTON')
        # Compliance with PEP8
        self.btn.config(command=lambda: self.add_label(self.entry.get()))
        self.btn.pack()

    def add_label(self, text):
        Label(self.parent, text=text).pack()

    def start(self):
        self.parent.mainloop()

root = Tk()

app = App(root)
app.start()
Community Discussions contain sources that include Stack Exchange Network
QUESTION
Use for loop or multiple prints?
Asked 2022-Mar-01 at 21:31
What programming style should I use?
...
print(1)
print(2)
or
...
for i in range(1, 3):
    print(i)
The output is the same, 1 and on the next line 2, but which version should I use as a Python programmer? I mean, is the first version redundant or not?
ANSWER
Answered 2022-Mar-01 at 21:31
It depends.
There is an old rule: "three or more, use for" (source).
On the other hand, sometimes unrolling a loop can offer a speed-up. (But that's generally more true in C or assembly.)
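To make the trade-off concrete (a Python sketch, added for illustration): the two styles from the question produce identical output, so the choice is purely about clarity and, occasionally, speed.

```python
import io
from contextlib import redirect_stdout

def unrolled():
    print(1)
    print(2)

def looped():
    for i in range(1, 3):
        print(i)

def output_of(f):
    # Capture everything f prints to stdout as a string.
    buf = io.StringIO()
    with redirect_stdout(buf):
        f()
    return buf.getvalue()

# Both versions emit exactly the same text.
print(output_of(unrolled) == output_of(looped))  # True
```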
You should do what makes your program more clear.
For example, in the code below, I wrote out the calculations for the ABD matrix of a fiber-reinforced composite laminate, because nested loops would make it more complex in this case:
for la, z2, z3 in zip(layers, lz2, lz3):
    # first row
    ABD[0][0] += la.Q̅11 * la.thickness  # Hyer:1998, p. 290
    ABD[0][1] += la.Q̅12 * la.thickness
    ABD[0][2] += la.Q̅16 * la.thickness
    ABD[0][3] += la.Q̅11 * z2
    ABD[0][4] += la.Q̅12 * z2
    ABD[0][5] += la.Q̅16 * z2
    # second row
    ABD[1][0] += la.Q̅12 * la.thickness
    ABD[1][1] += la.Q̅22 * la.thickness
    ABD[1][2] += la.Q̅26 * la.thickness
    ABD[1][3] += la.Q̅12 * z2
    ABD[1][4] += la.Q̅22 * z2
    ABD[1][5] += la.Q̅26 * z2
    # third row
    ABD[2][0] += la.Q̅16 * la.thickness
    ABD[2][1] += la.Q̅26 * la.thickness
    ABD[2][2] += la.Q̅66 * la.thickness
    ABD[2][3] += la.Q̅16 * z2
    ABD[2][4] += la.Q̅26 * z2
    ABD[2][5] += la.Q̅66 * z2
    # fourth row
    ABD[3][0] += la.Q̅11 * z2
    ABD[3][1] += la.Q̅12 * z2
    ABD[3][2] += la.Q̅16 * z2
    ABD[3][3] += la.Q̅11 * z3
    ABD[3][4] += la.Q̅12 * z3
    ABD[3][5] += la.Q̅16 * z3
    # fifth row
    ABD[4][0] += la.Q̅12 * z2
    ABD[4][1] += la.Q̅22 * z2
    ABD[4][2] += la.Q̅26 * z2
    ABD[4][3] += la.Q̅12 * z3
    ABD[4][4] += la.Q̅22 * z3
    ABD[4][5] += la.Q̅26 * z3
    # sixth row
    ABD[5][0] += la.Q̅16 * z2
    ABD[5][1] += la.Q̅26 * z2
    ABD[5][2] += la.Q̅66 * z2
    ABD[5][3] += la.Q̅16 * z3
    ABD[5][4] += la.Q̅26 * z3
    ABD[5][5] += la.Q̅66 * z3
    # Calculate unit thermal stress resultants.
    # Hyer:1998, p. 445
    Ntx += (la.Q̅11 * la.αx + la.Q̅12 * la.αy + la.Q̅16 * la.αxy) * la.thickness
    Nty += (la.Q̅12 * la.αx + la.Q̅22 * la.αy + la.Q̅26 * la.αxy) * la.thickness
    Ntxy += (la.Q̅16 * la.αx + la.Q̅26 * la.αy + la.Q̅66 * la.αxy) * la.thickness
    # Calculate H matrix (derived from Barbero:2018, p. 181)
    sb = 5 / 4 * (la.thickness - 4 * z3 / thickness ** 2)
    H[0][0] += la.Q̅s44 * sb
    H[0][1] += la.Q̅s45 * sb
    H[1][0] += la.Q̅s45 * sb
    H[1][1] += la.Q̅s55 * sb
    # Calculate E3
    c3 += la.thickness / la.E3
QUESTION
Why doesn't the rangeCheck method in the java.util.ArrayList class check for negative index?
Asked 2022-Feb-28 at 15:32
/**
 * Checks if the given index is in range. If not, throws an appropriate
 * runtime exception. This method does *not* check if the index is
 * negative: It is always used immediately prior to an array access,
 * which throws an ArrayIndexOutOfBoundsException if index is negative.
 */
private void rangeCheck(int index) {
    if (index >= size)
        throw new IndexOutOfBoundsException(outOfBoundsMsg(index));
}
From: jdk/ArrayList.java at jdk8-b120 · openjdk/jdk · GitHub
If we write the following code, both indexes are out of bounds, but the exception types are different.
import java.util.ArrayList;
import java.util.List;

public class Test {

    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("");

        try {
            list.get(-1);
        } catch (Exception e) {
            e.printStackTrace();
        }

        try {
            list.get(1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

}
The output is as follows:
java.lang.ArrayIndexOutOfBoundsException: -1
    at java.util.ArrayList.elementData(ArrayList.java:424)
    at java.util.ArrayList.get(ArrayList.java:437)
    at Test.main(Test.java:11)
java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
    at java.util.ArrayList.rangeCheck(ArrayList.java:659)
    at java.util.ArrayList.get(ArrayList.java:435)
    at Test.main(Test.java:17)
Related questions:
Why does the rangeCheckForAdd method in java.util.ArrayList check for negative indexes?
Why doesn't java.util.Arrays.ArrayList do index out-of-bounds checking?
What confuses me is why their implementations are inconsistent. Were these methods written by different people with their own programming styles? In other words, if out-of-bounds exceptions will eventually be thrown anyway, then there is no need to check.
ANSWER
Answered 2022-Feb-28 at 14:23
It's a micro-optimization. For code clarity you might prefer the same exception for both, but when you're in a hot loop you'll want to avoid an unnecessary operation. ArrayList being an old class, the effect this has may have varied between JDK versions. If someone has enough interest, they could benchmark it with 1.8 and newer JDKs to see how much of an optimization it is for get().
Since accessing a negative array index will fail anyway, there is no need to check for it. However, the size of the ArrayList is not always the same as the size of its internal array, so it needs to be checked explicitly.
As to why rangeCheckForAdd does check for negative indexes: good question. Adding is slow anyway, so the micro-optimization wouldn't make much of a difference. Maybe they wanted consistent error messaging there.
QUESTION
Are java streams able to lazilly reduce from map/filter conditions?
Asked 2022-Jan-12 at 09:30
I am using a functional programming style to solve the LeetCode easy question Count the Number of Consistent Strings. The premise of this question is simple: count the number of values for which the predicate "all values are in another set" holds.
I have two approaches, one which I am fairly certain behaves as I want it to, and the other which I am less sure about. Both produce the correct output, but ideally they would stop evaluating other elements after the output is in a final state.
public int countConsistentStrings(String allowed, String[] words) {
    final Set<Character> set = allowed.chars()
            .mapToObj(c -> (char)c)
            .collect(Collectors.toCollection(HashSet::new));
    return (int)Arrays.stream(words)
            .filter(word ->
                word.chars()
                    .allMatch(c -> set.contains((char)c))
            )
            .count();
}
In this solution, to the best of my knowledge, the allMatch statement will terminate and evaluate to false at the first instance of c for which the predicate does not hold true, skipping the other values in that stream.
public int countConsistentStrings(String allowed, String[] words) {
    Set<Character> set = allowed.chars()
            .mapToObj(c -> (char)c)
            .collect(Collectors.toCollection(HashSet::new));
    return (int)Arrays.stream(words)
            .filter(word ->
                word.chars()
                    .mapToObj(c -> set.contains((char)c))
                    .reduce((a,b) -> a&&b)
                    .orElse(false)
            )
            .count();
}
In this solution, the same logic is used, but instead of allMatch I use map and then reduce. Logically, after a single false value comes from the map stage, reduce will always evaluate to false. I know Java streams are lazy, but I am unsure when they "know" just how lazy they can be. Will this be less efficient than using allMatch, or will laziness ensure the same operation?
Lastly, in this code, we can see that the value of x will always be 0: after filtering for only positive numbers, their sum will always be positive (assume no overflow), so taking the minimum of positive numbers and a hardcoded 0 will be 0. Will the stream be lazy enough to always evaluate this to 0, or will it reduce every element after the filter anyway?
List<Integer> list = new ArrayList<>();
...
/*Some values added to list*/
...
int x = list.stream()
        .filter(i -> i >= 0)
        .reduce((a,b) -> Math.min(a+b, 0))
        .orElse(0);
To summarize the above, how does one know when the Java stream will be lazy? There are lazy opportunities that I see in the code, but how can I guarantee that my code will be as lazy as possible?
ANSWER
Answered 2022-Jan-12 at 09:30
The actual term you're asking for is short-circuiting:
Further, some operations are deemed short-circuiting operations. An intermediate operation is short-circuiting if, when presented with infinite input, it may produce a finite stream as a result. A terminal operation is short-circuiting if, when presented with infinite input, it may terminate in finite time. Having a short-circuiting operation in the pipeline is a necessary, but not sufficient, condition for the processing of an infinite stream to terminate normally in finite time.
The term “lazy” only applies to intermediate operations and means that they only perform work when being requested by the terminal operation. This is always the case, so when you don’t chain a terminal operation, no intermediate operation will ever process any element.
Finding out whether a terminal operation is short-circuiting is rather easy. Go to the Stream API documentation and check whether the particular terminal operation's documentation contains the sentence "This is a short-circuiting terminal operation." allMatch has it; reduce does not.
This does not mean that such optimizations based on logic or algebra are impossible. But the responsibility lies with the JVM's optimizer, which might do the same for loops. However, this requires inlining of all involved methods to be sure that this condition always applies and there are no side effects which must be retained. This behavioral compatibility implies that even if the processing gets optimized away, a peek(System.out::println) would keep printing all elements as if they were processed. In practice, you should not expect such optimizations, as the Stream implementation code is too complex for the optimizer.
QUESTION
Are any{}, all{}, and none{} lazy operations in Kotlin?
Asked 2022-Jan-12 at 01:03
I am using a functional programming style to solve the LeetCode easy question Count the Number of Consistent Strings. The premise of this question is simple: count the number of values for which the predicate "all values are in another set" holds.
I was able to do this pretty concisely like so:
class Solution {
    fun countConsistentStrings(allowed: String, words: Array<String>): Int {
        val permitted = allowed.toSet()
        return words.count{it.all{it in permitted}}
    }
}
I know that Java streams are lazy, but I have read that Kotlin is only lazy when asSequence is used and is otherwise eager.
For reductions to a boolean based on a predicate using any, none, or all, it makes the most sense to me that this should be done lazily (e.g. a single false in all should evaluate the whole expression to false and stop evaluating the predicate for other elements).
Are these operations implemented this way, or are they still done eagerly like other operations in Kotlin? If so, is there a way to do them lazily?
ANSWER
Answered 2022-Jan-12 at 00:03
The docs don't explicitly say, but this is easy enough to test.
class A : Iterable<String>, Iterator<String> {
    public override fun iterator(): Iterator<String> {
        return this
    }
    public override fun hasNext(): Boolean {
        return true
    }
    public override fun next(): String {
        return "test"
    }
}

fun main(args: Array<String>) {
    val a = A()
    println(a.any { x -> x == "test" })
    println(a.none { x -> x == "test" })
    println(a.all { x -> x != "test" })
}
Here, A is a silly iterable class that just produces "test" forever and never runs out. Then we use any, none, and all to check whether it produces "test" or not. It's an infinite iterable, so if any of these three functions wanted to try to exhaust it, the program would hang forever. But you can run this yourself, and you'll see one true and two falses. The program terminates. So each of those three functions stopped when it found, respectively, a match, a non-match, and a non-match.
QUESTION
Use map and zip to be more func style in 2 for loops
Asked 2021-Oct-19 at 03:58
I implemented the following code to calculate a weighted average with for loops. How can I write it in a more functional programming style, using map and zip?
val aggAvg = (emb: Seq[Seq[Float]], weights: Seq[Float]) => {
  val embSize = emb.head.size
  val len = emb.size
  (0 until embSize)
    .map { i =>
      (0 until len).map { j =>
        emb(j)(i) * weights(j)
      }.sum / weights.sum
    }
}
Example: Given
val emb: Seq[Seq[Float]] = Seq(Seq(1,2,3), Seq(4,5,6))
val weights: Seq[Float] = Seq(2, 8)
the output would be Seq(3.4, 4.4, 5.4), because (1 * 2 + 4 * 8) / (2 + 8) = 3.4 and so on.
ANSWER
Answered 2021-Oct-19 at 00:00
Here is one way, although I'm not sure if it's the most elegant:
val aggAvg = (emb: Seq[Seq[Float]], weights: Seq[Float]) =>
  emb.transpose.map((weights, _).zipped.map(_ * _).sum).map(_ / weights.sum)

res0: Seq[Float] = List(3.4, 4.4, 5.4)
QUESTION
malloc a "member" of struct v.s. whole struct when struct is quite simple
Asked 2021-Sep-23 at 16:33
I have searched this site for topics about malloc on structs. However, my problem is slightly different: is malloc on a member of a struct different from malloc on the whole struct, especially when the struct is quite simple, that is, when it has only one member, which is exactly what we want to allocate? To be clear, see the code corresponding to the student and student2 structs below.
#include <stdlib.h>  /* for malloc */

struct student {
    int* majorScore;
};

struct student2 {
    int majorScore[3];
};


int main()
{
    struct student john;
    john.majorScore = (int*) malloc(sizeof(int) * 3);
    john.majorScore[0] = 50;
    john.majorScore[1] = 27;
    john.majorScore[2] = 56;

    struct student2* amy = (struct student2*)malloc(sizeof(struct student2));
    amy->majorScore[0] = 50;
    amy->majorScore[1] = 27;
    amy->majorScore[2] = 56;

    return 0;
}
Are they different at the memory level? If yes, what is the difference? If no, which is perhaps better in terms of good programming style?
ANSWER
Answered 2021-Sep-23 at 16:15
First, you dynamically allocate one struct, but not the other. So you're comparing apples to oranges.
Statically-allocated structs:
struct student john;
john.majorScore = malloc(sizeof(int) * 3);
john.majorScore[0] = 50;
john.majorScore[1] = 27;
john.majorScore[2] = 56;

struct student2 amy;
amy.majorScore[0] = 50;
amy.majorScore[1] = 27;
amy.majorScore[2] = 56;

struct student john
+------------+----------+      +----------+
| majorScore |  --------+----->|    50    |
+------------+----------+      +----------+
| [padding]  |          |      |    27    |
+------------+----------+      +----------+
                               |    56    |
                               +----------+

struct student2 amy
+------------+----------+
| majorScore |    50    |
|            +----------+
|            |    27    |
|            +----------+
|            |    56    |
+------------+----------+
| [padding]  |          |
+------------+----------+
struct student uses more memory because it has an extra value (the pointer), and it has the overhead of two memory blocks instead of one. struct student2 always has memory for exactly three scores, even if you need fewer, and it can't possibly accommodate more than 3.
Dynamically-allocated structs:
struct student *john = malloc(sizeof(struct student));
john->majorScore = malloc(sizeof(int) * 3);
john->majorScore[0] = 50;
john->majorScore[1] = 27;
john->majorScore[2] = 56;

struct student2 *amy = malloc(sizeof(struct student2));
amy->majorScore[0] = 50;
amy->majorScore[1] = 27;
amy->majorScore[2] = 56;

struct student *john
+----------+      +------------+----------+      +----------+
|  --------+----->| majorScore |  --------+----->|    50    |
+----------+      +------------+----------+      +----------+
                  | [padding]  |          |      |    27    |
                  +------------+----------+      +----------+
                                                 |    56    |
                                                 +----------+

struct student2 *amy
+----------+      +------------+----------+
|  --------+----->| majorScore |    50    |
+----------+      |            +----------+
                  |            |    27    |
                  |            +----------+
                  |            |    56    |
                  +------------+----------+
                  | [padding]  |          |
                  +------------+----------+
Same analysis as above.
QUESTION
Difference between Running time and Execution time in algorithm?
Asked 2021-Aug-08 at 08:01
I'm currently reading the book CLRS (section 2.2, page 25), in which the author describes the running time of an algorithm as:
The running time of an algorithm on a particular input is the number of primitive operations or “steps” executed.
Also, the author uses the running time to analyze algorithms. Then I referred to a book called Data Structures and Algorithms Made Easy by Narasimha Karumanchi, in which he writes the following:
1.7 Goal of the Analysis of Algorithms The goal of the analysis of algorithms is to compare algorithms (or solutions) mainly in terms of running time but also in terms of other factors (e.g., memory, developer effort, etc.)
1.9 How to Compare Algorithms: To compare algorithms, let us define a few objective measures:
Execution times? Not a good measure as execution times are specific to a particular computer.
Number of statements executed? Not a good measure, since the number of statements varies with the programming language as well as the style of the individual programmer.
Ideal solution? Let us assume that we express the running time of a given algorithm as a function of the input size n (i.e., f(n)) and compare these different functions corresponding to running times. This kind of comparison is independent of machine time, programming style, etc.
As you can see, the CLRS author describes the running time as the number of steps executed, whereas the author of the second book says the number of statements executed is not a good measure for analyzing algorithms. Also, the running time depends on the computer (my assumption), but the author of the second book says that we cannot use the execution time to analyze algorithms, as it depends entirely on the computer.
I thought the execution time and the running time were the same!
So: what is the real meaning or definition of running time and execution time? Are they the same or different? Does running time describe the number of steps executed or not? Does running time depend on the computer or not?
Thanks in advance.
ANSWER
Answered 2021-Aug-08 at 07:57
What is the real meaning or definition of running time and execution time? Are they the same or different?
The definition of "running time" in 'Introduction to Algorithms' by C,L,R,S [CLRS] is actually not a time, but a number of steps. This is not what you would intuitively use as a definition. Most would agree that "running" and "executing" are the same concept, and that "time" is expressed in a unit of time (like milliseconds). So while we would normally consider these two terms to have the same meaning, in CLRS they have deviated from that and given a different meaning to "running time".
Does running time describe the number of steps executed or not?
It does mean that in CLRS. But the definition that CLRS uses for "running time" is particular, and not the same as you might encounter in other resources.
CLRS assumes here that a primitive operation (i.e. a step) takes O(1) time.
This is typically true for CPU instructions, which take up to a fixed maximum number of cycles (where each cycle represents a unit of time), but it may not be true in higher-level languages. For instance, some languages have a sort instruction. Counting that as a single "step" would give useless results in an analysis.
Breaking down an algorithm into its O(1) steps does help to analyse the complexity of an algorithm. Counting the steps for different inputs may only give a hint about the complexity though. Ultimately, the complexity of an algorithm requires a (mathematical) proof, based on the loops and the known complexity of the steps used in an algorithm.
Does running time depend on the computer or not?
Certainly the execution time may differ. This is one of the reasons we want to buy a new computer once in a while.
The number of steps may depend on the computer. If both support the same programming language, and you count steps in that language, then: yes. But if you would do the counting more thoroughly and would count the CPU instructions that are actually ran by the compiled program, then it might be different. For instance, a C compiler on one computer may generate different machine code than a different C compiler on another computer, and so the number of CPU instructions may be less on the one than the other, even though they result from the same C program code.
Practically however, this counting at CPU instruction level is not relevant for determining the complexity of an algorithm. We generally know the time complexity of each instruction in the higher level language, and that is what counts for determining the overall complexity of an algorithm.
QUESTION
Lifetime of get method in postgres Rust
Asked 2021-Jun-14 at 07:09
Some background (feel free to skip):
I'm very new to Rust; I come from a Haskell background (just in case that gives you an idea of any misconceptions I might have). I am trying to write a program which, given a bunch of inputs from a database, can create customisable reports. To do this I wanted to create a Field datatype which is composable in a sort of DSL style. In Haskell my intuition would be to make Field an instance of Functor and Applicative so that writing things like this would be possible:
type Env = [String]
type Row = [String]

data Field a = Field
  { fieldParse :: Env -> Row -> a }

instance Functor Field where
  fmap f a = Field $
    \env row -> f $ fieldParse a env row

instance Applicative Field where
  pure = Field . const . const
  fa <*> fb = Field $
    \env row -> (fieldParse fa) env row
              $ (fieldParse fb) env row

oneField :: Field Int
oneField = pure 1

twoField :: Field Int
twoField = fmap (*2) oneField

tripleField :: Field (Int -> Int)
tripleField = pure (*3)

threeField :: Field Int
threeField = tripleField <*> oneField
Actual Question:
I know that it's quite awkward to implement Functor and Applicative traits in Rust, so I just implemented the appropriate functions for Field rather than actually defining traits (this all compiled fine). Here's a very simplified implementation of Field in Rust, without any of the Functor or Applicative stuff.
use std::result;
use postgres::Row;
use postgres::types::FromSql;

type Env = Vec<String>;

type FieldFunction<A> = Box<dyn Fn(&Env, &Row) -> Result<A, String>>;

struct Field<A> {
    field_parse: FieldFunction<A>
}
I can easily create a function which simply gets the value from an input field and creates a report Field with it:
fn field_good(input: u32) -> Field<String> {
    let f = Box::new(move |_: &Env, row: &Row| {
        Ok(row.get(input as usize))
    });

    Field { field_parse: f }
}
But when I try to make this polymorphic rather than using String, I get some really strange lifetime errors that I just don't understand:
fn field_bad<'a, A: FromSql<'a>>(input: u32) -> Field<A> {
    let f = Box::new(move |_: &Env, row: &Row| {
        Ok(row.get(input as usize))
    });

    Field { field_parse: f }
}
error[E0495]: cannot infer an appropriate lifetime for autoref due to conflicting requirements
  --> src/test.rs:36:16
   |
36 |         Ok(row.get(input as usize))
   |                ^^^
   |
note: first, the lifetime cannot outlive the anonymous lifetime #2 defined on the body at 35:22...
  --> src/test.rs:35:22
   |
35 |       let f = Box::new(move |_: &Env, row: &Row| {
   |  ______________________^
36 | |         Ok(row.get(input as usize))
37 | |     });
   | |_____^
note: ...so that reference does not outlive borrowed content
  --> src/test.rs:36:12
   |
36 |         Ok(row.get(input as usize))
   |            ^^^
note: but, the lifetime must be valid for the lifetime `'a` as defined on the function body at 34:14...
  --> src/test.rs:34:14
   |
34 | fn field_bad<'a, A: FromSql<'a>>(input: FieldId) -> Field<A> {
   |              ^^
note: ...so that the types are compatible
  --> src/test.rs:36:16
   |
36 |         Ok(row.get(input as usize))
   |                ^^^
   = note: expected `FromSql<'_>`
              found `FromSql<'a>`
Any help explaining what this error is actually getting at or how to potentially fix it would be much appreciated. I included the Haskell stuff so that my design intentions are clear, that way if the problem is that I'm using a programming style that doesn't really work in Rust, then that could be pointed out to me.
EDIT:
Forgot to include a link to the docs for postgres::Row::get in case it's relevant. They can be found here.
ANSWER
Answered 2021-Jun-10 at 12:54
So I seem to have fixed it, although I'm still not sure I understand exactly what I've done...
type FieldFunction<'a, A> = Box<dyn Fn(&Env, &'a Row) -> Result<A, String>>;

struct Field<'a, A> {
    field_parse: FieldFunction<'a, A>
}

fn field_bad<'a, A: FromSql<'a>>(input: u32) -> Field<'a, A> {
    let f = Box::new(move |_: &Env, row: &'a Row| {
        Ok(row.get(input as usize))
    });

    Field { field_parse: f }
}
I also swear I tried this several times before but there we are...
QUESTION
Is there a way to implement mapcar in Common Lisp using only applicative programming and avoiding recursion or iteration as programming styles?
Asked 2021-May-25 at 10:22
I am trying to learn Common Lisp with the book Common Lisp: A Gentle Introduction to Symbolic Computation. In addition, I am using SBCL, Emacs and Slime.
In chapter 7, the author suggests there are three styles of programming the book will cover: recursion, iteration and applicative programming.
I am interested in the last one. This style is famous for the applicative operator funcall, which is the primitive responsible for other applicative operators such as mapcar. Thus, for educational purposes, I decided to implement my own version of mapcar using funcall:
(defun my-mapcar (fn xs)
  (if (null xs)
      nil
      (cons (funcall fn (car xs))
            (my-mapcar fn (cdr xs)))))
As you might see, I used recursion as a programming style to build an iconic applicative programming function.
It seems to work:
CL-USER> (my-mapcar (lambda (n) (+ n 1)) (list 1 2 3 4))
(2 3 4 5)

CL-USER> (my-mapcar (lambda (n) (+ n 1)) (list ))
NIL

;; comparing the results with the official one

CL-USER> (mapcar (lambda (n) (+ n 1)) (list ))
NIL

CL-USER> (mapcar (lambda (n) (+ n 1)) (list 1 2 3 4))
(2 3 4 5)
Is there a way to implement mapcar without using recursion or iteration, using only applicative programming as a style?
Thanks.
Obs.: I tried to see how it was implemented, but it was not possible:
CL-USER> (function-lambda-expression #'mapcar)
NIL
T
MAPCAR
I also used Emacs M-. to look for the definition. However, the entries it found below did not help me:
/usr/share/sbcl-source/src/code/list.lisp
  (DEFUN MAPCAR)
/usr/share/sbcl-source/src/compiler/seqtran.lisp
  (:DEFINE-SOURCE-TRANSFORM MAPCAR)
/usr/share/sbcl-source/src/compiler/fndb.lisp
  (DECLAIM MAPCAR SB-C:DEFKNOWN)
ANSWER
Answered 2021-May-21 at 17:36
mapcar is by itself a primitive applicative operator (page 220 of Common Lisp: A Gentle Introduction to Symbolic Computation). So, if you want to rewrite it in an applicative way, you should use some other primitive applicative operator, for instance map or map-into. For instance, with map-into:
CL-USER> (defun my-mapcar (fn list &rest lists)
           (apply #'map-into (make-list (length list)) fn list lists))
MY-MAPCAR
CL-USER> (my-mapcar #'1+ '(1 2 3))
(2 3 4)
CL-USER> (my-mapcar #'+ '(1 2 3) '(10 20 30) '(100 200 300))
(111 222 333)
QUESTION
Create TKinter label using class method
Asked 2021-May-07 at 09:22
I am trying to use an object-oriented programming style to write the code for a Tkinter app. I want to use a class method to place labels (or other widgets) in the GUI. The code I wrote adds a character to the GUI which I don't expect. How can I write the initial add_label method so that it does not add the unwanted character? Below is my code and a screenshot. I am new to OOP, so I might be missing something.
from tkinter import *
class App:
    def __init__(self, parent):
        self.widgets(root)
        self.add_label(root)

    def widgets(self, app):
        self.title = Label(app, text= 'LABEL UP').pack()
        self.btn = Button(app, text = 'BUTTON').pack()
    def add_label(self, text):
        Label(text= text).pack()

root = Tk()
App(root)
App.add_label(root, 'LABEL_1')
App.add_label(root,'LABEL_2')
root.mainloop()
I am new to OOP and still trying to figure out how I can benefit from code reuse in this case. My app has several widgets and functions.
ANSWER
Answered 2021-May-07 at 09:22
What do you expect self.add_label(root) to do? According to your method definition, it takes text as an argument, so when you say self.add_label(root), you are passing root as text. And what is root? It is '.', so remove that call and the stray character will be gone.
Though a proper way to do this would be to pass a parent argument to the method and use that for widget creation. And the important part is, you are instantiating the class wrong. Keep a reference to an instance, rather than calling methods on the class itself.
from tkinter import *

class App:
    def __init__(self, parent):
        self.widgets(root)

    def widgets(self, app):
        self.title = Label(app, text= 'LABEL UP').pack()
        self.btn = Button(app, text = 'BUTTON').pack()

    def add_label(self, parent, text):
        Label(parent,text= text).pack()

root = Tk()

app = App(root)
app.add_label(root, 'LABEL_1')
app.add_label(root,'LABEL_2')

root.mainloop()
Try not to get confused between the two mistakes.
How would I write this class? I don't know the true purpose of this, but I think you can follow something like this:
from tkinter import *

class App:
    def __init__(self, parent):
        self.parent = parent

        self.title = Label(self.parent, text='LABEL UP')
        self.title.pack()

        self.entry = Entry(self.parent)
        self.entry.pack()

        self.btn = Button(self.parent, text='BUTTON')
        # Compliance with PEP8
        self.btn.config(command=lambda: self.add_label(self.entry.get()))
        self.btn.pack()

    def add_label(self, text):
        Label(self.parent, text=text).pack()

    def start(self):
        self.parent.mainloop()

root = Tk()

app = App(root)
app.start()
Community Discussions contain sources that include Stack Exchange Network