avr | Jeremy Cole's AVR Projects
kandi X-RAY | avr Summary
Many smaller projects written by Jeremy Cole for Atmel AVR MCUs.
Community Discussions
Trending Discussions on avr
QUESTION
I'm trying to interface a BME680 gas sensor module with an AVR controller (an ATmega644P, to be specific) using Atmel Studio on Windows. The BME680 comes with example functions spread over multiple .h and .c files for configuration, plus one .a file (a static library). The .h and .c files call functions from the static library as well, so all of the .h, .c and .a files need to be included.
The .a file was new to me; after some basic web searching I concluded that .a files are to Unix what .lib files are to Windows.
So can you find me a way to either:
convert the .a file I have to a .lib file, or add the .a file to Atmel Studio on Windows?
Any help will be highly appreciated.
I did try to include the .a file using the following steps:
- In Project => Properties, click on the Toolchain tab.
- Under XC8 Linker, click on Libraries.
- In the Libraries (-l) window, click the "+" sign and add "libalgobsec" to the list.
- In the Library search path (-L) window, click on the "+" sign.
- In the "Add Library search path (-L)" dialog, click on the "..." button.
- In the file dialog, navigate to the folder that contains libalgobsec.a and click OK.
- Under Project Properties => XC8 Linker => Miscellaneous => Other Objects, add: -u _fstat -u _read -u _write
But it gives this error: Compilation Error.
...ANSWER
Answered 2021-Jun-06 at 10:33
After two months of experiments and coordination with BOSCH, here is the conclusion.
The pre-compiled library is only compatible with AVR controllers that have 256 Kbytes of flash memory, and only the following four controllers make the list:
- ATmega2561
- ATmega2564RFR2
- ATmega2560
- ATmega256RFR2
So if you try to build the libalgobsec.a shipped in the BSEC software for ATmega controllers against any controller other than those above (in my case an ATmega644P), it simply doesn't compile.
QUESTION
I am building a simple Timer/Counter application that generates a delay using normal mode on Atmel's ATmega48PA, using Timer1 to toggle an LED at a constant interval. When using the interrupt, the LED toggles for a definite amount of time, then the toggling halts, leaving the LED always ON! I believe it has something to do with the sei() function or with enabling the global interrupt bit in SREG, as I had seen similar behavior with this same microcontroller before when using interrupts.
Here is a code snippet to go with my question, although anybody would say this code looks perfectly normal and ought to work correctly!
...ANSWER
Answered 2021-Jun-06 at 06:38
Well, at first look there is no particular problem in your code. So let's check the possibilities:
First, you have done a 32-bit calculation and put the result into a 16-bit register:
TCNT1 = ( ( 4194304 - delayMS ) * 1000 ) / 64;
This puts an unpredictable (truncated) value into the register, so I recommend using appropriate values, or adding (long) and (int) casts to your code, to prevent overflow.
Second, you have not put the data lines in the correct order:
QUESTION
I'm running into an issue with trait bounds and can't understand what I'm doing wrong. I'm working with the arduino-uno crate from avr-hal and I have a function that reads the ADC, implemented as follows:
...ANSWER
Answered 2021-Jun-05 at 14:56
The compiler error says:
the trait bound &mut T: avr_hal_generic::embedded_hal::adc::Channel is not satisfied
Notice that the error is asking for &mut T to implement Channel, i.e. &mut T: Channel<...>, whereas your bound T: Channel<...> applies to T itself rather than to &mut T. That's why the bound you've already written isn't helping.
Now, what's the right fix? If I look at the docs you linked, I can find the type of ::read. (Note: I copied the text from the docs to construct this snippet; I didn't read the source code.)
QUESTION
I want to implement an embedded project using an STM32F0 (ARM-based) with VS Code. The project ran properly on other systems.
- I added the C/C++ extension to VS Code.
- I installed a compiler for the Cortex-M0: the GNU Arm Embedded Toolchain (gcc-arm for Windows).
- Makefiles installed: binaries file + dependencies file.
- openOCD installed (Open On-Chip Debugger).
- tasks.json (build instructions) and c_cpp_properties.json (compiler path and IntelliSense settings) were created. I modified the include path because my program includes header files that aren't in my workspace and aren't on the standard library path.
c_cpp_properties.json file
...ANSWER
Answered 2021-May-27 at 13:45
Cannot open source file avr/io.h (dependency of hal.h)
You appear to be using ChibiOS, which has a file hal.h that includes halconf.h, which in turn includes mcuconf.h. Apparently you have an AVR port of ChibiOS where you need STM32 (ARM Cortex-M) support.
But how can VS Code find dependencies before compiling?
The same way the compiler/pre-processor does: by having include paths configured, parsing the project files, and accounting for any externally defined (command-line) macros.
I was also wondering if I should add a path for main.cpp and the other C and C++ files in the VS Code configuration file to solve these problems?
I believe it will parse project files in any case. It only needs to find the header files included in a source file to provide context for parsing that source file.
For debugging, I don't see any debugger in the list, though I installed openOCD and added its path to the environment variables.
That is an entirely different question - post a new one for it.
QUESTION
I have two functions, both similar to this:
...ANSWER
Answered 2021-Jun-01 at 18:57
I question the following assumption:
This didn't work. It is clear that the compiler is optimising out much of the code related to z completely! The code then fails to function properly (running far too fast), and the size of the compiled binary drops to about 50% or so.
Looking at https://gcc.godbolt.org/z/sKdz3h8oP, it seems the loops are actually being performed; however, for whatever reason each z++, when using a global volatile z, goes from:
QUESTION
I have separated the code into files (*.c and *.h) and included them. I have header guards, and all the separated files were reported as being built:
...ANSWER
Answered 2021-May-30 at 07:57
Sketch.cpp is compiled as C++, including test.h. In order to support function overloading, class membership etc., C++ uses name mangling to encode these C++ features in symbol names. As such, the symbol name for some_test in Sketch.cpp is not the same as the one in test.c, which is compiled as C with no name mangling applied.
The solution is to prevent name mangling for this symbol when the header is compiled as C++, by specifying that the symbol has C linkage:
QUESTION
I am trying to store a bi-directional graph as an adjacency list using a std::map. The idea is to store n nodes, numbered 1 to n, in this map.
The input is given as u v, denoting an edge between node u and node v. We get n such inputs on n lines.
My code for storing the graph:
ANSWER
Answered 2021-May-25 at 17:50
Your bug is that, by writing graph(), you are declaring a function named graph. Remove the parentheses and all will be fine.
QUESTION
I am looking for a procedure to compile and upload my code for the STM32L432KC Nucleo board from the Linux terminal, like the procedure I used with my ATmega328P (Here).
I have grown rather attached to using vim and the gdb debugger, and I was happy doing so for my AVR ATmega328P with avr-gcc (and avra for assembly) for a while now. But now I want to move on and dive deeper into embedded systems, so I bought the Nucleo board (Documentation Page).
So I just need a small tutorial like the one above for compiling, linking and flashing the code without needing to install any IDEs.
...ANSWER
Answered 2021-May-23 at 12:14
The STM32 chips are all Cortex-M based (a core ST licenses from ARM), and so far they all support the Cortex-M0 instruction set (ARMv6-M). You can check the ST documentation to see which Cortex-M core a given chip has, then go to that core's Technical Reference Manual on ARM's website (infocenter.arm.com), which says which architecture it implements (ARMv6-M, ARMv7-M, ARMv8-M, ...), and the corresponding Architecture Reference Manual describes the instruction set. You should not start this journey without the minimum documents: the ARM TRM and ARM ARM for the core and architecture, plus the reference manual (not just the programming manual) and the datasheet from ST.
The Cortex-Ms boot off a vector table, described in the Architecture Reference Manual (ARM ARM). The first word is loaded into the stack pointer; the second is the reset vector, and it is required to have its lsbit set to 1 (indicating a Thumb function address). You can read about the rest later; for a minimal example that is enough.
All of the STM32 chips I have worked with (and I have worked with a ton of them) map user flash at 0x08000000 and SRAM at 0x20000000; a small percentage also support a faster memory alias at 0x00200000. Some of the newer firmware that ships with Nucleo boards insists on the proper 0x08000000 address in the vector table. The ARM documentation says the logic looks for the vector table at 0x00000000 on reset (or wherever VTOR points); there are various ways to skin this cat, but ST chooses to mirror a portion of the flash at 0x00000000.
So a very simple example to get you started.
Bootstrap, flash.s
QUESTION
Intending to study a sort algorithm of my own, I decided to compare its performance with the classical quicksort, and to my great surprise I discovered that the time taken by my implementation of quicksort is far from proportional to N log(N). I tried thoroughly to find an error in my quicksort, but without success. It is a simple version of the algorithm, working with arrays of Integer of different sizes filled with random numbers, and I have no idea where the error could sneak in. I even counted all the comparisons and swaps executed by my code, and their number was fairly proportional to N log(N). I am completely confused and can't understand what I am observing. Here are the benchmark results for sorting arrays of 1,000, 2,000, 4,000, 8,000 and 16,000 random values (measured with JMH):
ANSWER
Answered 2021-May-18 at 21:03
Three points work together against your implementation:
- Quicksort has a worst-case complexity of O(n^2).
- Picking the leftmost element as pivot gives worst-case behavior on already-sorted arrays (https://en.wikipedia.org/wiki/Quicksort#Choice_of_pivot):
In the very early versions of quicksort, the leftmost element of the partition would often be chosen as the pivot element. Unfortunately, this causes worst-case behavior on already sorted arrays
- Your algorithm sorts the arrays in place, meaning that after the first pass the "random" array is sorted. (To calculate average times, JMH does several passes over the data.)
To fix this, you could change your benchmark methods; for example, you could change sortArray01000() to
QUESTION
I'm working with an ATmega168p and compiling with avr-gcc.
Specifically, I have an RS485 slave that receives bytes via UART and writes them to a buffer in an ISR. If an end character is received, a flag is set in the ISR. In my main loop this flag is checked and the input buffer is processed if necessary. However, some time can pass between the arrival of the end byte and the moment the handler in the main loop processes the input buffer, because of the other "stuff". This results in a latency that can reach several milliseconds, because e.g. sensors are only read every n-th iteration.
...ANSWER
Answered 2021-May-16 at 12:47
Don't try to use setjmp()/longjmp() to re-enter a main-level function from an ISR. That is a recipe for disaster, because the ISR never finishes correctly. You might be tempted to work around it in assembly, but that is really fragile, and I'm not sure it works on AVRs at all.
Since your baud rate is 38400, one byte needs at least some 250 µs to transfer. Assuming your message has a minimum of 4 bytes, the time to transfer a message is at least 1 ms.
There are multiple possible solutions; your question might be closed because they are opinion-based... However, here are some ideas:
Time-sliced main tasks
Since a message can arrive at most about once per millisecond, your application doesn't need to be much faster than that.
Divide your main tasks into separate steps, each running faster than 1 ms. You might like to use a state machine, for example to allow slower I/O to finish.
After each step, check for a completed message. Using a loop avoids code duplication.
Completely interrupt-based application
Use a timer interrupt to do the repeated work. Divide it into short tasks; a state machine does its magic here, too.
Use an otherwise unused interrupt to signal the end of the message. Its ISR may run a bit longer, because it will not be called often. This ISR can handle the message and change the state of the application.
You need to think about interrupt priorities with great care.
The endless loop in main() will effectively be empty, like for (;;) {}.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network