iot-devices | Helper library for interfacing with devices in IoT projects | iOS library
kandi X-RAY | iot-devices Summary
Helper library for interfacing with devices in IoT projects.
Community Discussions
Trending Discussions on iot-devices
QUESTION
I am trying to securely connect multiple devices (200+) to Microsoft Azure IoT Central. I have an Android app running API 19 that connects a single device via HTTPS to IoT Central.
I am following the tutorial for SAS group enrollment.
I understand that I need a connection string to connect to IoT Central, which is composed of the underlying IoT Hub name, the device primary key, and the device ID (which can be the device IMEI or similar, so it can be auto-generated).
However, inserting the primary key for each device would require modifying the app for 200+ devices.
In order to auto-generate the device primary key, it can be derived from the SAS-IoT-Devices group master key by running: az iot central device compute-device-key --primary-key --device-id
or, in my case, using Android Studio with the code:
ANSWER
Answered 2021-Mar-11 at 20:09
In the absence of a unique hardware root of trust, your security posture will always be relatively weak.
One option is to generate the device-specific key in an Azure service, e.g. an Azure Function, which can use the master key stored in Azure Key Vault. The Android app will still need to attest its unique identity with the function and request device-specific identities. This avoids having a common master key in the app.
If you have the option to take advantage of a unique ID on Android, e.g. an FID (https://developer.android.com/training/articles/user-data-ids), it can be used to attest the app's identity with the function.
Another option is to generate a key pair per device, use it to create a CSR, and obtain a device-specific X.509 certificate. This adds more complexity and still needs a bootstrap attestation mechanism.
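For reference, the derivation that `az iot central device compute-device-key` performs on the group master key is an HMAC-SHA256 of the device ID, keyed with the base64-decoded master key. A minimal sketch in Python; the key and device ID values below are placeholders, not real credentials:

```python
import base64
import hashlib
import hmac

def derive_device_key(master_key_b64: str, device_id: str) -> str:
    """Derive a per-device SAS key from the enrollment group master key.

    Mirrors `az iot central device compute-device-key`: HMAC-SHA256 of the
    device ID, keyed with the base64-decoded group master key.
    """
    key = base64.b64decode(master_key_b64)
    digest = hmac.new(key, device_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# Placeholder values -- substitute your real group master key and device ID.
group_master_key = base64.b64encode(b"example-master-key").decode("ascii")
print(derive_device_key(group_master_key, "device-001"))
```

Doing this derivation server-side (e.g. in the Azure Function suggested above) keeps the master key out of the app entirely.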
QUESTION
The immediate issue I am trying to overcome is that my aws-lambda function is not connecting to my broker using the JS MQTT library. I am able to use this library in a local Node environment to connect, just not in the aws-lambda function.
I have created a zip file from this repo: https://github.com/JordanKlaers/AlexaMQTT
That I uploaded to my lambda function. I am using the exported function from index.js.
Everything works well except for the part where it does not connect to the broker/client (line 83 in index.js). When I run oldIndex.js from the repo I linked (which is just the promise function that connects, from the aws-lambda function) in my local Node environment, it connects and things run correctly.
I don't know how to create a minimum reproducible sketch because its success is based on interacting with hardware. I did create oldIndex.js as a minimum sketch to show that at least the function to connect works. I have included logs of the lambda function to show that it works as expected up to the attempt to connect.
The only thing I can speculate is some issue with the permissions for the role used with the lambda function, but I have researched and added different policies to my role and that hasn't helped.
Here are the logs from the function when called (which show that it gets to the promise and attempts to connect but doesn't succeed):
I had done almost everything myself, but got some final clarification on my approach from this tutorial, so I'm not sure what else I'm not considering or missing.
...ANSWER
Answered 2019-Dec-28 at 21:57
The main problem here is that your broker is running on a Pi attached to your local home network.
This means it is behind a home broadband router which is performing Network Address Translation (NAT). This takes packets from your home network (10.0.0.0/24) and remaps them so they appear to come from your public-facing IP address.
This means that the Lambda code (running on AWS) cannot directly send packets to the broker, so it has no way to connect.
There are several possible solutions to this; here are a couple:
- Run a broker on a cloud hosting provider. You will then be able to reach it from anywhere.
- Enable port forwarding on your router to expose port 1883 to the internet and forward any packets to the broker running on your Raspberry Pi. (This option depends on you having a fixed IP address or dynamic DNS.)
For both of these you will probably want to enable authentication/authorisation on the broker and also probably add TLS.
You also need to look closer at the MQTT.js library and how to enable error tracking so you can see why things fail, e.g.
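(In MQTT.js, attaching a handler such as client.on('error', ...) makes connection failures visible instead of silent.) One quick way to confirm the NAT diagnosis is to check from outside the home network whether the broker's host and port are reachable at all. A small stdlib-only Python sketch of that check; the public IP below is a placeholder:

```python
import socket

def broker_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unroutable, etc.
        return False

# A broker behind NAT (no port forwarding) will appear unreachable from outside:
print(broker_reachable("203.0.113.10", 1883, timeout=2.0))  # placeholder public IP
```

If this returns False from a machine outside your LAN, no amount of Lambda IAM policy changes will help; the packets simply cannot reach the broker.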
QUESTION
I'm using the Embedded Linux JVM Debugger for IntelliJ by At Sebak (https://plugins.jetbrains.com/plugin/7738-embedded-linux-jvm-debugger-raspberry-pi-beaglebone-black-intel-galileo-ii-and-several-other-iot-devices-) to run my Javalin API with the Pi4J libraries remotely on my Raspberry Pi.
My question is: where can I find or set the Pi directory where the Linux debugger runs (so I can put files in that directory for the API to open)?
...ANSWER
Answered 2019-Nov-03 at 12:45
When running the JVM Debugger configuration, it shows exactly where it runs in the run console of IntelliJ.
I could also find out where it runs by asking for the location of the current class with this line of code:
final File f = new File(Light.class.getProtectionDomain().getCodeSource().getLocation().getPath());
QUESTION
The data source is from the Databricks Notebook demo: Five Spark SQL Helper Utility Functions to Extract and Explore Complex Data Types!
But when I try this code on my own laptop, I always get errors.
First, load JSON data as DataFrame
...ANSWER
Answered 2017-Sep-27 at 17:42
The error message clearly shows the source of the problem:
org.apache.spark.sql.AnalysisException: Required attribute 'value' not found;
The Dataset to be written has to have at least a value column (and optionally key and topic), and res2 has only battery_level and c02_level.
You can for example:
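The example itself is missing from the page; presumably it packs the remaining columns into a single value column before writing, which in Spark would be something like to_json(struct("battery_level", "c02_level")).alias("value") (an assumption, since the original snippet was not captured). The same reshaping in plain Python, for illustration:

```python
import json

# Sample rows mirroring the res2 columns from the answer.
rows = [
    {"battery_level": 8, "c02_level": 1200},
    {"battery_level": 3, "c02_level": 950},
]

# A Kafka sink requires a single `value` column; serialize each record into it.
records = [{"value": json.dumps(row, sort_keys=True)} for row in rows]
print(records[0]["value"])  # -> {"battery_level": 8, "c02_level": 1200}
```

The point is the same either way: the writer only looks for value (plus optional key and topic), so everything else must be folded into it.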
QUESTION
Context: Each customer can have 100-1000 IoT devices behind NAT. We have multiple customers. The aim is to manage these devices from outside. The devices use the CoAP protocol, which by default runs over UDP.
There are a few constraints:
- It is not possible to activate port forwarding.
- It is not possible to open a VPN connection.
- Any changes to the local network of the IoT devices are not possible.
Problem: We'd like to be able to open a connection to a device from outside at any time, but the NAT prevents it.
Options: As I understand it, the device has to open the initial request in order to communicate.
Which of the following options is the best one regarding scalability and efficiency?
- Each node sends UDP pings in order to keep the NAT mapping open.
- Each node uses TCP and sends keepalives to keep the NAT mapping open.
- Each node communicates over UDP with a local proxy behind the NAT. The proxy maps CoAP to HTTP, establishes a TCP connection to the server, and sends keepalives to keep the NAT mapping open.
- Same as option 3, but the local proxy uses WebSocket instead of plain TCP.
Thank you very much.
...ANSWER
Answered 2017-Oct-05 at 12:28
The official LWM2M answer to this is queuing mode; see slide 30 of https://www.slideshare.net/OpenMobileAlliance/oma-lwm2m-tutorial-by-arm-to-ietf-ace or slide 19 of https://mbed-media.mbed.com/filer_public/c1/c3/c1c35bec-5f0e-4a28-a422-115248c9a181/armmbed-lwm2m-webinar.pdf for more information. So the proposed solution is not listed under options 1 to 4 above, but uses the LWM2M protocol to send a "ping" in the form of a registration update.
From a security viewpoint, if you deploy to the public internet, I would suggest that:
a) you MUST use DTLS;
b) you should support device firmware updates and be able to deploy new firmware with patches very fast.
Personal view: LWM2M is broken by design by starting with the (wrong) idea that IoT devices are servers.
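Whatever transport is chosen, the keep-alive mechanics in options 1-4 reduce to the same thing: the device periodically sends a small packet outbound so the NAT's mapping stays warm and the server can reach back in. A stdlib Python sketch of the UDP variant (option 1); the server address and interval are placeholders, and LWM2M would send a registration update instead of a raw ping:

```python
import socket
import time

SERVER = ("198.51.100.7", 5683)  # placeholder server address (CoAP default port)
PING_INTERVAL = 25  # seconds; must stay below the NAT's UDP mapping timeout

def keepalive_loop(pings: int, server=SERVER, interval=PING_INTERVAL) -> None:
    """Send small UDP datagrams so the NAT keeps the outbound mapping open."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for _ in range(pings):
            sock.sendto(b"ping", server)  # any tiny payload refreshes the mapping
            time.sleep(interval)
    finally:
        sock.close()
```

The interval is the tuning knob: NAT UDP timeouts are commonly well under a minute, while TCP mappings (options 2-4) usually survive much longer, which is the main efficiency trade-off between the options.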
QUESTION
I am looking at serverless architecture to process some customer data. The processing itself is probably quite quick, but for various reasons I would like the cloud service provider to guarantee execution isolation. So far, I've talked to a rep from Amazon, who said that AWS Lambda functions are not effectively isolated, and the lambda container may end up being reused.
Effectively, when running a function and, say, writing something to memory or disk (here we might not have control, as part of the solution would let customers execute arbitrary code), I would like a sandbox isolation guarantee.
I've read that Microsoft was going to offer such isolation, but apart from a news story, I couldn't find any concrete information. There they allude to extra costs of sandboxing functions, for example.
So is there any provider that can guarantee execution isolation?
...ANSWER
Answered 2017-Sep-18 at 09:57
Apparently Google Cloud Functions guarantees isolated execution:
Run in a fully-managed, serverless environment where Google handles infrastructure, operating systems, and runtime environments completely on your behalf. Each Cloud Function runs in its own isolated secure execution context, scales automatically, and has a lifecycle independent from other functions.
Emphasis mine
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported