IDAPythonEmbeddedToolkit -- IDAPython scripts
- Extracts all functions defined in the binary
- Creates the indices for the JNI envs
- Matches a BLX instruction
- Removes the comment from the given disasm
- Runs tests
- Tests for BLX
- Tests if LNR is passed in
- Tests if LDR 1 is passed in
Trending Discussions on Embedded System
QUESTION
I'm currently writing some code for embedded systems (both in C and C++), and in trying to minimize memory use I've noticed that I use a lot of code that relies on integer promotion. For example (to my knowledge this code is identical in C and C++):
uint8_t brightness = 40;
uint8_t maxval = 255;
uint8_t localoutput = (brightness * maxval) / 100;
So even though brightness * 255 is larger than what can be stored in a uint8_t, this still yields the correct result due to, if I'm correct, integer promotion. Brightness is a percentage, so it should never be higher than 100, and therefore localoutput should never be higher than 255. My question is whether any unexpected behaviour can occur (such as brightness * maxval being larger than 255 and therefore overflowing), or whether there are any significant differences in how this syntax is handled between C++ and C. It seems to just output the correct answer. Or would it be more advisable to make the variables uint16_t, since the intermediate calculations may be higher than 255, and just accept the extra memory use?
ANSWER
Answered 2022-Mar-31 at 19:52
Your question raises an important issue in C programming and in programming in general: does the program behave as expected in all cases?
The expression (brightness * maxval) / 100 computes an intermediary value brightness * maxval that may exceed the range of the type used to compute it. In Python and some other languages, this is not an issue because integers do not have a restricted range, but in C, C++, Java, JavaScript and many other languages, integer types have a fixed number of bits, so the multiplication can exceed this range.
It is the programmer's responsibility to ascertain that the ranges of the operands ensure that the multiplication does not overflow. This requires a good understanding of the integer promotion and conversion rules, which vary from one language to another and are somewhat tricky in C, especially with operands mixing signed and unsigned types.
In your particular case, both brightness and maxval have a type smaller than int, so they are promoted to int with the same value, and the multiplication produces an int value. If brightness is a percentage in the range 0 to 100, the result is in the range 0 to 25500, which the C Standard guarantees to be in the range of type int. Dividing this number by 100 produces a value in the range 0 to 255, which is in the range of int and also in the range of the destination type uint8_t, so the operation is fully defined.
Whether this process should be documented in a comment or verified with debugging assertions is a matter of local coding rules. Changing the order of the operands to maxval * brightness / 100 and possibly using more explicit values and variable names might help the reader:
uint8_t brightness100 = 40;
uint8_t localoutput = 255 * brightness100 / 100;
The problem is more general than just a question of integer promotions: all such computations should be analyzed for corner cases and value ranges. Automated tools can help perform range analysis, and optimizing compilers do it to improve code generation, but it is a difficult problem.
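The value-range argument above can also be checked exhaustively with a short script. This is just a brute-force confirmation of the bound the answer derives by hand, not part of the original answer:

```python
# Exhaustive check of the range argument above: for every brightness
# percentage 0..100, the promoted intermediate product stays within the
# minimum int range the C Standard guarantees, and the final result
# fits in uint8_t.
INT_MAX_MINIMUM = 32767   # int must hold at least +/-32767

worst = 0
for brightness in range(101):
    intermediate = brightness * 255         # what the promoted C multiply computes
    worst = max(worst, intermediate)
    assert intermediate <= INT_MAX_MINIMUM  # the multiply cannot overflow int
    assert 0 <= intermediate // 100 <= 255  # the result fits uint8_t

print(worst)  # 25500, matching the bound derived above
```

This is the kind of check the answer suggests could live in a debugging assertion or a unit test.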
QUESTION
Coming from a C/C++ background, I am aware of coding standards that apply to safety-critical applications (like the classic trio Medical-Automotive-Aerospace) in the context of embedded systems, such as MISRA, SEI CERT, Barr, etc.
Skipping the question of whether it should be, or is, applicable as a language, I want to create Python applications for embedded systems that -even vaguely- follow some safety standard, but I couldn't find any by searching, except for generic Python coding standards (like PEP8).
Is there a Python coding guideline that specifically applies to safety-critical systems?
ANSWER
Answered 2022-Feb-02 at 08:46
Top-layer safety standards for "functional safety" like IEC 61508 (industrial), ISO 26262 (automotive) or DO-178 (aerospace) etc. come with a software part (for example IEC 61508-3), where they list a number of suitable programming languages. These are exclusively old languages proven in use for a long time, where all flaws and poorly-defined behavior are regarded as well-known and execution can be regarded as predictable.
In practice, for the highest safety levels it means that you are pretty much restricted to C with safe subset (MISRA C) or Ada with safe subset (SPARK). A bunch of other old languages like Modula-2, Pascal and Fortran are also mentioned, but the tool support for these in the context of modern safety MCUs is non-existent. As is support for Python for such MCUs.
Languages like Python and C++ are not even mentioned for the lowest safety levels, so between the lines they are dismissed as entirely unsuitable. Even less so than pure assembler, which is actually mentioned as something that may be used for the lower safety levels.
QUESTION
I've been developing in C using Eclipse as my IDE in a virtual machine running Ubuntu. I've made some progress and wanted to test it on the real product, which is an embedded system using PowerPC.
In order to compile the program for our product I use Code::Blocks on Windows, but the compiler is a PowerPC build of GCC.
The same code gives me an error in the PowerPC version that doesn't appear in the Ubuntu version.
I have two header files gral.h and module_hand.h as follows:
The gral.h file:
#ifndef HEADERS_GRAL_H_
#define HEADERS_GRAL_H_
#include "module_hand.h"
typedef struct PROFILE
{
module_t mod; // this one comes from module_hand.h
int var1; // some other random variables
} profile_t;
#endif /* HEADERS_GRAL_H_ */
module_hand.h is defined as follows:
#ifndef HEADERS_MODULE_HAND_H_
#define HEADERS_MODULE_HAND_H_
#include
#include "gral.h"
typedef struct PROFILE profile_t;
typedef struct module
{
char name[30]; // name of module
char rev[30]; // module revision
char mfr[30]; // manufacturer
} module_t;
int Mod_Init(profile_t *profile);
/* some other random functions */
#endif /* HEADERS_MODULE_HAND_H_*/
As you'll see, I don't use the PROFILE struct in the module struct, but I forward-declare it to use it in the declaration of the Mod_Init function.
This gives me Error: redefinition of typedef 'profile_t' and error: previous declaration of 'profile_t' was here.
If I remove the forward declaration, the error is Error: parse error before '*' token, where the line number is the line of the function declaration.
What am I missing, and why does GCC on Ubuntu compile it with no problem?
ANSWER
Answered 2022-Mar-17 at 18:30
In the gral.h header file, you define profile_t using typedef, then you redefine profile_t with another typedef in module_hand.h. You should just define struct PROFILE in gral.h and include gral.h in module_hand.h.
gral.h:
#ifndef HEADERS_GRAL_H_
#define HEADERS_GRAL_H_
#include "module_hand.h"
typedef struct PROFILE {
module_t mod; // this one comes from module_hand.h
int var1; // some other random variables
} profile_t;
#endif /* HEADERS_GRAL_H_ */
module_hand.h:
#ifndef HEADERS_MODULE_HAND_H_
#define HEADERS_MODULE_HAND_H_
#include
typedef struct module
{
char name[30]; // name of module
char rev[30]; // module revision
char mfr[30]; // manufacturer
} module_t;
struct PROFILE;  /* forward declaration, so the prototype does not introduce a new, incompatible struct */
int Mod_Init(struct PROFILE *profile);
/* some other random functions */
#endif /* HEADERS_MODULE_HAND_H_*/
QUESTION
I'm working on a Linux embedded system at the moment and using Yocto to build a Linux distribution for a board.
I've followed Yocto build flow:
- download layers sources
- build image
- flash image into the board or generate SDK.
Everything works great. However, I was required to make some changes to local.conf, and probably add some *.bbappend files, systemd services and so forth. So I'm wondering how I can save those local changes in case I want to set up a new build machine or the current one gets corrupted.
Should I create a custom image or layer that inherits everything from the board manufacturer's one and adds the changes and functionality I need? Or something else?
ANSWER
Answered 2022-Mar-11 at 08:27
Generally, when working on a custom project with Yocto, here is what you will possibly need:
First of all, you need to create your custom layer
bitbake-layers create-layer meta-custom
and add it:
bitbake-layers add-layer
After that, here are some ideas:
Official recipe modification:
When you have to modify some official recipe that exists in another official layer, create a .bbappend file in your custom layer and make your changes there. For example, for the recipe:
meta-official/recipes-example/example/example_1.0.bb
your modifications must be made under:
meta-custom/recipes-example/example/example_1.0.bbappend
or, to match all versions of that recipe:
meta-custom/recipes-example/example/example_%.bbappend
Distro modification:
If you changed DISTRO_FEATURES in local.conf, you may need to create a new distro in your new custom layer:
meta-custom/conf/distro/custom-distro.conf
In custom-distro.conf:
- include or require your currently used distro
- add your custom configuration (e.g. DISTRO_FEATURES)
Then, when creating a new build, set (in local.conf):
DISTRO = "custom-distro"
Examples of distro changes:
- Select the init manager, for example INIT_MANAGER = "systemd"
- Add some distro features
- Set some preferred recipe versions: PREFERRED_VERSION_recipe = "x"
- Set some preferred providers: PREFERRED_PROVIDER_virtual/xx = "x"
Machine modification:
If your board has permanent hardware components that, by default, are not activated in Yocto, then I suggest creating a new custom machine as well:
meta-custom/conf/machine/custom-machine.conf
In that file, include or require your current machine configuration file, and you may:
- Select your preferred virtual/kernel provider
- Select your preferred virtual/bootloader provider
- Select your custom kernel and bootloader device tree files
- etc.
and then set it (in local.conf):
MACHINE = "custom-machine"
Image modification:
This is the most probable modification one may have: adding some packages to the image with IMAGE_INSTALL. So you may need to create a custom image:
meta-custom/recipes-core/images/custom-image.bb
In that, require or include another image and:
- add packages with IMAGE_INSTALL
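A minimal custom image recipe along those lines might look like this (the base image, SUMMARY text and extra package names here are only illustrative, not from the answer):

```
# meta-custom/recipes-core/images/custom-image.bb
require recipes-core/images/core-image-minimal.bb

SUMMARY = "Custom image for our board"

# Add extra packages on top of the base image
IMAGE_INSTALL:append = " i2c-tools htop"
```

You would then build it with bitbake custom-image.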
NOTES
- If you have a bbappend that appends to an official bbappend, consider giving your layer higher priority than the official one in meta-custom/conf/layer.conf
- If your new custom layer depends on your manufacturer's layer, you may consider declaring that dependency in the layer conf file:
LAYERDEPENDS_meta-custom = "meta-official"
- I recommend using kas, with which you can set up an automatic layer configuration including your custom layer and create the build automatically; this is also useful for DevOps pipeline automation.
This is what I can think of right now :))
EDIT
You can then create a custom repository for your custom layer.
If you are using repo for your manufacturer-provided initialization, then you can use this idea. You can customize the manufacturer's manifest file to add your new custom repository, like the following:
- Add a remote block for your custom git server
- If your custom layer is directly under the git server, remove group, or set it if applicable
- Then, add your custom layer as a project
You can check for more repo details here.
Finally, you just use repo with your custom repository/manifest:
repo init -u -b -m custom-project.xml
repo sync
QUESTION
I have a few large static arrays that are used in a resource-constrained embedded system (small microcontroller, bare metal). These are occasionally added to over the course of the project, but all follow the same mathematical formula for population. I could just make a Python script to generate a new header with the needed arrays before compilation, but it would be nicer to have it happen in the pre-processor, like you might do with template meta-programming in C++. Is there any relatively easy way to do this in C? I've seen ways to get control structures like while loops using just the pre-processor, but that seems a bit unnatural to me.
Here is an example of one such map, an approximation to arctan, in Python, where the parameter a determines the length and values of the array and is currently run at a variety of values from about 100 to about 2^14:
def make_array(a):
    output = []
    for x in range(a):
        val = round(a * ((x / a)**2 / (2 * (x / a)**2 - 2 * (x / a) + 1)))
        output.append(val)
    return output
ANSWER
Answered 2022-Mar-08 at 22:33
Is there any relatively easy way to do this in C?
No.
Stick to a Python script and incorporate it into your build system. It is normal to generate C code using other scripts, and it will be far easier than the pre-processor machinery it would take in C.
Take a look at M4 or Jinja2 (or PHP) - these macro processors allow mixing template code with C source in the same file.
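As a sketch of that approach, a generator built around the question's make_array could emit a C header like this. The file name, array name and chosen values of a are invented for illustration:

```python
def make_array(a):
    # The question's formula, unchanged
    output = []
    for x in range(a):
        val = round(a * ((x / a)**2 / (2 * (x / a)**2 - 2 * (x / a) + 1)))
        output.append(val)
    return output

def emit_header(a, name="atan_lut"):
    """Render one C array definition for make_array(a)."""
    vals = ", ".join(str(v) for v in make_array(a))
    return (f"/* Auto-generated - do not edit. a = {a} */\n"
            f"static const int {name}_{a}[{a}] = {{ {vals} }};\n")

# A build step would write this into a header the C code #includes:
# with open("atan_lut.h", "w") as f:
#     for a in (128, 1024, 1 << 14):
#         f.write(emit_header(a))
print(emit_header(4))
```

Running the generator before each compile (e.g. as the first rule in the Makefile) keeps the header in sync with the formula.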
QUESTION
I'm trying to build libc6 with a custom prefix by modifying the prefix=/usr line in debian/rules. However, this fails because the patch is applied multiple times. Curiously, patching another file does not result in the same error. I've distilled the failure down to this script:
#!/bin/bash
set -exuo pipefail
work_dir=$(mktemp -d)
cd "$work_dir"
apt-get source libc6
cd eglibc-2.19
cat <<'PATCH' > ../set-prefix.diff
Index: eglibc-2.19/debian/rules
===================================================================
--- eglibc-2.19.orig/debian/rules>2022-03-01 17:27:53.068299816 -0800
+++ eglibc-2.19/debian/rules>-2022-03-01 17:27:53.068299816 -0800
@@ -85,7 +85,7 @@
# Default setup
EGLIBC_PASSES ?= libc
-prefix=/usr
+prefix=/new/prefix
bindir=$(prefix)/bin
datadir=$(prefix)/share
localedir=$(prefix)/lib/locale
PATCH
cat <<'PATCH' > ../change-readme.diff
Index: eglibc-2.19/README
===================================================================
--- eglibc-2.19.orig/README 2013-10-18 14:33:25.000000000 -0700
+++ eglibc-2.19/README 2022-03-02 17:00:49.954759733 -0800
@@ -1,5 +1,7 @@
This directory contains the Embedded GNU C Library (EGLIBC).
+Add a line.
+
EGLIBC is a variant of the GNU C Library (GLIBC) that is designed to
work well on embedded systems. EGLIBC strives to be source and binary
compatible with GLIBC. EGLIBC's goals include reduced footprint,
PATCH
quilt import ../change-readme.diff -P any/change-readme.diff
quilt push
quilt import ../set-prefix.diff -P any/set-prefix.diff
quilt push
dpkg-buildpackage -us -uc -S -ai386
Here's the relevant output I'm seeing:
dpkg-source -b eglibc-2.19
dpkg-source: info: using options from eglibc-2.19/debian/source/options: --compression=xz
dpkg-source: info: using source format `3.0 (quilt)'
dpkg-source: info: building eglibc using existing ./eglibc_2.19.orig.tar.xz
patching file debian/rules
Reversed (or previously applied) patch detected! Skipping patch.
1 out of 1 hunk ignored
dpkg-source: info: fuzz is not allowed when applying patches
dpkg-source: info: if patch 'any/set-prefix.diff' is correctly applied by quilt, use 'quilt refresh' to update it
dpkg-source: error: LC_ALL=C patch -t -F 0 -N -p1 -u -V never -g0 -E -b -B .pc/any/set-prefix.diff/ --reject-file=- < eglibc-2.19.orig.yzNU0V/debian/patches/any/set-prefix.diff gave error exit status 1
dpkg-buildpackage: error: dpkg-source -b eglibc-2.19 gave error exit status 2
Commenting out the quilt import and quilt push commands for set-prefix.diff results in success, but of course the prefix isn't updated like I want. It seems like patches are getting applied multiple times, which is fine for most files but not debian/rules - maybe these patches are applied on a fresh source directory but the debian/ directory is left intact?
What is the recommended way to build libc6 with a custom prefix without having dpkg-buildpackage/dpkg-source fail due to reapplying patches?
ANSWER
Answered 2022-Mar-07 at 18:33
The debian/rules file is special [citation needed] and shouldn't be patched using the usual quilt commands. You can modify it directly before building the package or use the patch command (patch -p1 in this case).
QUESTION
This is a timetable: columns = hour, rows = weekday, data = subject [weekday x hour]
Hour:      1, 2, 3, 4, 5, 6, 7
Monday:    Project, Project, Project, Data Science, Embedded Systems, Data Mining, Industrial Psychology
Tuesday:   Project, Project, Project, Project, Data Science, Industrial Psychology, Embedded Systems
Wednesday: Data Science, Project, Project, Project, Project, Project, Project
Thursday:  Data Mining, Industrial Psychology, Embedded Systems, Data Mining, Project, Project, Project
Friday:    Industrial Psychology, Embedded Systems, Data Science, Data Mining, Project, Project, Project
How do you generate a pandas.DataFrame where rows = weekday, columns = subject, and data = the subject's frequency on the corresponding weekday?
Required table: [weekday x subject]
Data Mining, Data Science, Embedded Systems, Industrial Psychology, Project
Name
Monday 1 1 1 1 3
Tuesday ...
Wednesday
Thursday
Friday
self.file = 'timetable.csv'
self.sdf = pd.read_csv(self.file, header=0, index_col="Name")
print(self.sdf.to_string())
self.subject_frequency = self.sdf.apply(pd.value_counts)
print(self.subject_frequency.to_string())
self.subject_frequency["sum"] = self.subject_frequency.sum(axis=1)
ANSWER
Answered 2022-Mar-05 at 16:06
Use melt to flatten your dataframe, then pivot_table to reshape it:
out = (
df.melt(var_name='Freq', value_name='Data', ignore_index=False).assign(variable=1)
.pivot_table('Freq', 'Name', 'Data', fill_value=0, aggfunc='count')
.loc[df.index] # sort by original index: Monday > Thuesday > ...
)
Output:
>>> out
Data Data Mining Data Science Embedded Systems Industrial Psychology Project
Name
Monday 1 1 1 1 3
Tuesday 0 1 1 1 4
Wednesday 0 1 0 0 6
Thursday 2 0 1 1 3
Friday 1 1 1 1 3
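An equivalent, arguably more direct route is pd.crosstab on the melted frame. This sketch (not part of the original answer) uses a small made-up timetable of two weekdays and two hours rather than the full CSV:

```python
import pandas as pd

# Small stand-in for the timetable: rows = weekday, columns = hour, data = subject
df = pd.DataFrame(
    {"1": ["Project", "Project"], "2": ["Project", "Data Science"]},
    index=pd.Index(["Monday", "Tuesday"], name="Name"),
)

# Flatten to long form (one row per weekday/hour cell), then cross-tabulate
long = df.melt(var_name="hour", value_name="subject", ignore_index=False)
out = (
    pd.crosstab(long.index, long["subject"])
    .loc[df.index]          # restore the original weekday order
)
print(out)
```

crosstab counts the (weekday, subject) pairs and fills absent combinations with 0, which is exactly the required frequency table.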
QUESTION
I need to run NumPy on an embedded system that has an ARM SoC, so I cross-compiled Python 3.8.10 and NumPy using arm-linux-gnueabihf-gcc. Then I copied both the executables and the libraries to the embedded system. But when I try to import numpy I get the following error:
>>> import numpy
Traceback (most recent call last):
File "", line 1, in
File "/usr/lib/python3.8/dist-packages/numpy/__init__.py", line 142, in
from . import core
File "/usr/lib/python3.8/dist-packages/numpy/core/__init__.py", line 17, in
from . import multiarray
File "/usr/lib/python3.8/dist-packages/numpy/core/multiarray.py", line 14, in
from . import overrides
File "/usr/lib/python3.8/dist-packages/numpy/core/overrides.py", line 7, in
from numpy.core._multiarray_umath import (
AttributeError: module 'datetime' has no attribute 'datetime_CAPI'
So I checked the attributes of datetime:
>>> import datetime as dt
>>> dir(dt)
['MAXYEAR', 'MINYEAR', '_DAYNAMES', '_DAYS_BEFORE_MONTH', '_DAYS_IN_MONTH', '_DI100Y', '_DI400Y', '_DI4Y',
'_EPOCH', '_MAXORDINAL', '_MONTHNAMES', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__',
'__name__', '__package__', '__spec__', '_build_struct_time', '_check_date_fields', '_check_int_field', '
_check_time_fields', '_check_tzinfo_arg', '_check_tzname', '_check_utc_offset', '_cmp', '_cmperror',
'_date_class', '_days_before_month', '_days_before_year', '_days_in_month', '_divide_and_round', '_format_offset',
'_format_time', '_is_leap', '_isoweek1monday', '_math', '_ord2ymd', '_parse_hh_mm_ss_ff', '_parse_isoformat_date',
'_parse_isoformat_time', '_time', '_time_class', '_tzinfo_class', '_wrap_strftime', '_ymd2ord', 'date', 'datetime',
'sys', 'time', 'timedelta', 'timezone', 'tzinfo']
and I noticed two things: the private functions are showing, and the attribute "datetime_CAPI" does not exist, which explains the error. I did the same check on the PC I'm using to build Python and I get:
>>> import datetime as dt
>>> dir(dt)
['MAXYEAR', 'MINYEAR', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__',
'__package__', '__spec__', 'date', 'datetime', 'datetime_CAPI', 'sys', 'time', 'timedelta', 'timezone', 'tzinfo']
Checking that "datetime_CAPI" attribute shows:
>>> dt.datetime_CAPI
It seems to be some kind of object used to call C functions from Python. But why is it missing?
ANSWER
Answered 2022-Jan-30 at 13:03
I found the problem; it was a Python compilation issue. I used the following commands to compile Python and the problem was solved.
CC=arm-linux-gnueabihf-gcc CXX=arm-linux-gnueabihf-g++ AR=arm-linux-gnueabihf-ar \
RANLIB=arm-linux-gnueabihf-ranlib \
./configure --host=arm-linux-gnueabihf --target=arm-linux-gnueabihf \
--build=x86_64-linux-gnu --prefix=$HOME/python3.8.10 \
--disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no \
ac_cv_have_long_long_format=yes --enable-shared
make HOSTPYTHON=$HOME/python3.8.10 \
BLDSHARED="arm-linux-gnueabihf-gcc -shared" CROSS-COMPILE=arm-linux-gnueabihf- \
CROSS_COMPILE_TARGET=yes HOSTARCH=arm-linux BUILDARCH=arm-linux-gnueabihf
make altinstall HOSTPYTHON=$HOME/python3.8.10 \
BLDSHARED="arm-linux-gnueabihf-gcc -shared" CROSS-COMPILE=arm-linux-gnueabihf- \
CROSS_COMPILE_TARGET=yes HOSTARCH=arm-linux BUILDARCH=arm-linux-gnueabihf \
prefix=$HOME/python3.8.10/_install
QUESTION
I am trying to write special type handling for array data redundancy. The idea is to define and declare arrays globally at compile time with a fixed size, where the size differs for each declared array. This is the idea:
array[N] = { el1, el2, el3, ..., elN };
At first, I used the following syntax, which works:
#define ARRAY_DEFDEC(name, size, ...) \
int arr_##name[size] = { __VA_ARGS__ }
When I use this macro I get the expected result:
// C code
ARRAY_DEFDEC(array_test, 7, 1, 2, 3, 4, 5, 6, 7);
// Preprocessed
int arr_array_test[7] = { 1, 2, 3, 4, 5, 6, 7 };
Now for the problem I am having, which I don't know if it is possible to solve. While doing this, I also need to create a second array in which all the values are inverted (using the ~ operator, or alternatively (0 - element + 1)). I have tried ~__VA_ARGS__, but naturally it will only change the first element (in the above example with arr_array_test I get -2, 2, 3, 4, 5, 6, 7).
- Is it possible somehow to apply the ~ operator to all of __VA_ARGS__?
I have a solution which would do the following:
#define ARRAY_DEFDEC(name, size, ...) \
int arr_##name[2*size] = { __VA_ARGS__ };
and then it would be used in the following way:
ARRAY_DEFDEC(test, 7, 1, 2, 3, 4, 5, 6, 7, ~1, ~2, ~3, ~4, ~5, ~6, ~7)
This would require quite a lot of logic to be changed and a user needs to know that besides initialising elements, binary inverse needs to be provided, so I do not really prefer to do this.
At this moment in time I am assuming that the argument size matches the number of elements in __VA_ARGS__.
The arrays are intended to be used as a global definition (since they need to be accessed by multiple functions).
Note: since it is an embedded system, external libraries cannot be included. There is not a single standard library on the system (e.g. stdio, stdarg, stdint, etc.). This further limits the options. The standard used is C99 and the compiler is from Green Hills Software.
ANSWER
Answered 2022-Jan-28 at 00:00
I feel like a solution to this would be one of those macros that consists of two dozen sub-macros, and those solutions always make me decide to solve the problem some other way. Macros can do some things, but they're not a full programming language, so they're limited in what they can do.
I would just write a small utility to convert the raw data to C code and then #include that. You can compile the utility as part of your build process and use it to generate code before compiling the rest. So your data.txt could just say "test 1 2 3 4 5 6 7" and your utility would output whatever declarations you need.
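Such a utility can be tiny. This sketch follows the question's arr_ naming convention; everything else (the function name, the printed example) is made up for illustration. It emits both the original array and the ~-inverted companion, sidestepping the need to apply ~ inside __VA_ARGS__:

```python
def emit_arrays(name, values):
    """Emit C definitions for `values` and its bitwise-inverted copy."""
    n = len(values)
    plain = ", ".join(str(v) for v in values)
    inverted = ", ".join(str(~v) for v in values)   # ~x == -(x + 1)
    return (f"int arr_{name}[{n}] = {{ {plain} }};\n"
            f"int arr_{name}_inv[{n}] = {{ {inverted} }};\n")

print(emit_arrays("array_test", [1, 2, 3, 4, 5, 6, 7]))
```

The generated file would be #included where the ARRAY_DEFDEC macro was previously invoked.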
QUESTION
I am looking for an efficient way of doing pixel manipulation in Python. The goal is to make a Python script that acts as a virtual desktop for an embedded system. I already have a version that works, but it takes more than a second to display a single frame (too long). Refreshing the display 5 times per second would be great.
How it works:
- There is an electronic device with microcontroller and display (128x64px, black and white pixels).
- There is a PC connected to it via RS-485.
- There is a data buffer in the microcontroller that represents every single pixel. Let's call it diplay_buffer.
- Python script on PC downloads diplay_buffer from microcontroller.
- Python script creates image according to data from diplay_buffer. (THIS I NEED TO OPTIMIZE)
diplay_buffer is an array of 1024 bytes. Microcontroller prepares it and then displays its content on the real display. I need to display a virtual copy of real display on PC screen using python script.
How it is displayed:
A single bit in diplay_buffer represents a single pixel; the display has 128x64 pixels. Each byte from diplay_buffer represents 8 pixels vertically. The first 128 bytes represent the first row of pixels (there are 64 px / 8 pixels per byte = 8 rows).
I use Python Tk and the function img.put() to insert pixels. I insert a black pixel if the bit is 1 and a white one if the bit is 0. It is very ineffective. Maybe there is a different class than PhotoImage, with better pixel capability?
I attach minimal code with a sample diplay_buffer. When you run the script, you will see the frame and the execution time.
Maybe somebody would be so helpful as to try to optimize it? Could you tell me a faster way of displaying pixels, please?
Sample frame downloaded from uC
And the code (you can easily run it)
#this script displays value from uC display buffer in a python screen
from tkinter import Tk, Canvas, PhotoImage, mainloop
from math import sin
import time
WIDTH, HEIGHT = 128, 64
ROWS = 8
#some code from tutorial... check what it does:
window = Tk()
canvas = Canvas(window, width=WIDTH, height=HEIGHT, bg="#ffffff")
canvas.pack()
img = PhotoImage(width=WIDTH, height=HEIGHT)
canvas.create_image((WIDTH/2, HEIGHT/2), image=img, state="normal")
#this is sample screen from uC. It is normally periodically read from uC on runtime to refresh screen view.
diplay_buffer =bytes([16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 0, 0, 0, 0, 0, 0, 0, 130, 254, 130, 0, 0, 254, 32, 16, 8, 254, 0, 254, 144, 144, 144, 128, 0, 124, 130, 130, 130, 124, 0, 0, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 16, 16, 16, 16, 16, 0, 0, 0, 18, 42, 42, 42, 36, 0, 28, 34, 34, 34, 28, 0, 0, 16, 126, 144, 64, 0, 32, 32, 252, 34, 36, 0, 0, 0, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 2, 130, 252, 128, 0, 4, 42, 42, 30, 2, 0, 62, 16, 32, 32, 30, 0, 0, 0, 0, 0, 0, 0, 0, 66, 254, 2, 0, 0, 130, 132, 136, 144, 224, 0, 0, 0, 0, 0, 0, 0, 78, 146, 146, 146, 98, 0, 124, 138, 146, 162, 124, 0, 78, 146, 146, 146, 98, 0, 78, 146, 146, 146, 98, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 254, 16, 16, 16, 254, 0, 28, 42, 42, 42, 24, 0, 0, 130, 254, 2, 0, 0, 0, 130, 254, 2, 0, 0, 28, 34, 34, 34, 28, 0, 0, 0, 0, 0, 0, 0, 254, 144, 144, 144, 128, 0, 62, 16, 32, 32, 16, 0, 0, 34, 190, 2, 0, 0, 28, 
42, 42, 42, 24, 0, 62, 16, 32, 32, 30, 0, 28, 34, 34, 20, 254, 0, 0, 0, 250, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 124, 130, 130, 130, 68, 0, 4, 42, 42, 30, 2, 0, 62, 16, 32, 32, 30, 0, 0, 0, 0, 0, 0, 0, 50, 9, 9, 9, 62, 0, 28, 34, 34, 34, 28, 0, 60, 2, 2, 4, 62, 0, 0, 0, 0, 0, 0, 0, 28, 34, 34, 34, 28, 0, 63, 24, 36, 36, 24, 0, 32, 32, 252, 34, 36, 0, 0, 34, 190, 2, 0, 0, 62, 32, 30, 32, 30, 0, 0, 34, 190, 2, 0, 0, 34, 38, 42, 50, 34, 0, 28, 42, 42, 42, 24, 0, 64, 128, 154, 144, 96, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 248, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 248, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 254, 146, 146, 146, 108, 0, 4, 42, 42, 30, 2, 0, 28, 34, 34, 34, 20, 0, 254, 8, 20, 34, 0, 0, 0, 0])
def get_normalized_bit(value, bit_index):
    return (value >> bit_index) & 1

time_start = time.time()

# first pixels are drawn invisible (some kind of frame in python) so set an offset:
x_offset = 2
y_offset = 2
x = x_offset
y = y_offset

# display all uC pixels (single screen frame):
byteIndex = 0
for j in range(ROWS):            # byte-rows
    for i in range(WIDTH):       # columns
        for n in range(8):       # bits within the byte
            if get_normalized_bit(diplay_buffer[byteIndex], 7-n):
                img.put("black", (x, y+n))
            else:
                img.put("white", (x, y+n))
        x += 1
        byteIndex += 1
    x = x_offset
    y += 8                       # each byte covers 8 pixel rows
time_stop = time.time()
print("Refresh time: ", str(time_stop - time_start), "seconds")
mainloop()
ANSWER
Answered 2022-Jan-22 at 14:05
I don't really use Tkinter, but I have read that using put() to write individual pixels into an image is very slow. So, I adapted your code to put the pixels into a Numpy array instead, then use PIL to convert that to a PhotoImage.
The conversion of your byte buffer into a PhotoImage takes around 1 ms on my Mac. It could probably go 10-100x faster if you wrapped the three for loops into a Numba-jitted function, but it doesn't seem worth it as it is probably fast enough.
#!/usr/bin/env python3
import numpy as np
from tkinter import *
from PIL import Image, ImageTk
# INSERT YOUR variable display_buffer here <<<
# Make a Numpy array of uint8, that will become
# ... our PIL Image that will become...
# ... a PhotoImage
WIDTH, HEIGHT, ROWS = 128, 64, 8
na = np.zeros((HEIGHT, WIDTH), np.uint8)
idx = 0
x = y = 0
for j in range(ROWS):
    for i in range(WIDTH):
        b = display_buffer[idx]
        for n in range(8):
            na[y+n, x] = (1 - ((b >> (7-n)) & 1)) * 255
        idx += 1
        x += 1
    x = 0
    y += 8   # each byte covers 8 pixel rows
# Make Numpy array into PIL Image
PILImage = Image.fromarray(na)
border = 10
root = Tk()
canvas = Canvas(root, width = 2*border + WIDTH, height = 2*border + HEIGHT)
canvas.pack()
# Make PIL Image into PhotoImage
img = ImageTk.PhotoImage(PILImage)
canvas.create_image(border, border, anchor=NW, image=img)
root.mainloop()
Also, I don't know how fast your serial line is, but it may take some time to transmit 1024 bytes, so you could consider starting a second thread to repeatedly read 1024 bytes from your serial port and stuff them into a Queue for the main process to get() them from.
Also, you could avoid Tkinter altogether and just use OpenCV imshow() like this:
#!/usr/bin/env python3
import numpy as np
import cv2
# INSERT YOUR display_buffer here <<<
# Make a Numpy array of uint8, that will be displayed
WIDTH, HEIGHT, ROWS = 128, 64, 8
na = np.zeros((HEIGHT, WIDTH), np.uint8)
idx = 0
x = y = 0
for j in range(ROWS):
    for i in range(WIDTH):
        b = display_buffer[idx]
        for n in range(8):
            na[y+n, x] = (1 - ((b >> (7-n)) & 1)) * 255
        idx += 1
        x += 1
    x = 0
    y += 8   # each byte covers 8 pixel rows
while True:
    # Display image
    cv2.imshow("Virtual Console", na)
    # Wait for user to press "q" to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
I decided to have a try with Numba, and the time to extract a 128x64 frame dropped to 68 microseconds. Note that the Python has to be compiled the first time through, so I did a warm-up run to include the compilation and then measured the second run:
#!/usr/bin/env python3
import numba as nb
import numpy as np
from tkinter import *
from PIL import Image, ImageTk
import time
# Make a Numpy array of uint8, that will become
# ... our PIL Image that will become...
# ... a PhotoImage
WIDTH, HEIGHT, ROWS = 128, 64, 8
na = np.zeros((HEIGHT, WIDTH), np.uint8)

@nb.njit()
def extract(na, display_buffer):
    idx = 0
    x = y = 0
    for j in range(ROWS):
        for i in range(WIDTH):
            b = display_buffer[idx]
            for n in range(8):
                na[y+n, x] = (1 - ((b >> (7-n)) & 1)) * 255
            idx += 1
            x += 1
        x = 0
        y += 8   # each byte covers 8 pixel rows
    return na
# Following is first run which includes compilation time
warmup = extract(na, display_buffer)
# Only time the second run
start = time.time()
na = extract(na, display_buffer)
# Make Numpy array into PIL Image
PILImage = Image.fromarray(na)
elapsed = (time.time()-start)*1000
print(f'Total time: {elapsed} ms') # Reports 0.068 ms
border = 10
root = Tk()
canvas = Canvas(root, width = 2*border + WIDTH, height = 2*border + HEIGHT)
canvas.pack()
# Make PIL Image into PhotoImage
img = ImageTk.PhotoImage(PILImage)
canvas.create_image(border, border, anchor=NW, image=img)
root.mainloop()
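For completeness, the triple loop can also be vectorized entirely with np.unpackbits, assuming the layout described in the question (each byte is a column of 8 vertical pixels, MSB on top, so each byte-row advances the image by 8 pixel rows). This sketch is not from the original answer, and the zero buffer is just a placeholder for the real display_buffer:

```python
import numpy as np

WIDTH, HEIGHT, ROWS = 128, 64, 8
display_buffer = bytes(WIDTH * ROWS)   # placeholder: an all-white frame

buf = np.frombuffer(display_buffer, dtype=np.uint8).reshape(ROWS, WIDTH)
# unpackbits is MSB-first, matching bit 7 = topmost pixel
bits = np.unpackbits(buf[:, :, None], axis=2)        # shape (ROWS, WIDTH, 8)
# reorder to (byte-row, bit, column) and merge the row axes -> (64, 128)
na = ((1 - bits.transpose(0, 2, 1)) * 255).astype(np.uint8).reshape(HEIGHT, WIDTH)
print(na.shape)  # (64, 128)
```

The result matches the loop version with the 8-row stride and can be fed straight to Image.fromarray(na), with no per-pixel Python overhead at all.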
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Install IDAPythonEmbeddedToolkit
- TRIAGE: Define Code & Functions, Define Data, Define Strings
- ANALYSIS: Calculate Indirect Offset Memory Accesses, Find Memory Accesses
- ANNOTATE: Identify GPIO Usage, Identify "Dead" Code, Trace Operand Use