azure-sdk-for-python | active development of the Azure SDK | Azure library
kandi X-RAY | azure-sdk-for-python Summary
This repository is for active development of the Azure SDK for Python. For consumers of the SDK we recommend visiting our public developer docs or our versioned developer docs.
Top functions reviewed by kandi - BETA
- Creates a new type definition.
- Creates a glossary term.
- Performs an image search.
- Performs an operation on the specified object.
- Performs a search.
- Performs a bulk create-or-update operation on a collection.
- Performs a partial update of an entity.
- Creates or updates a resource set rule.
- Converts a span to an envelope.
- Performs a spell check.
azure-sdk-for-python Examples and Code Snippets
# Here is the code where I get the storage account name and key, and
# authenticate via an SPN stored in Key Vault.
...
customer_name = 'abc-qa'
container_name_source = "conteiner1"
blob_name_source = customer_name + "-blobfolder1/blobfolder2"
container_name_target = ...  # assumed name; the original snippet is truncated here
# Databricks: read the workspace host name and the API URL from the notebook context.
dbutils.notebook.entry_point.getDbutils().notebook().getContext() \
    .browserHostName().get()
dbutils.notebook.entry_point.getDbutils().notebook().getContext() \
    .apiUrl().get()
import json  # assumed completion; the original snippet is truncated at "import j"
from azure.identity import DefaultAzureCredential  # import added for completeness

default_scope = "https://graph.microsoft.com/.default"

def get_token():
    credential = DefaultAzureCredential()
    token = credential.get_token(default_scope)
    return token[0]  # the first element of the AccessToken tuple is the token string
pip install --upgrade pywin32==225
conda install pywin32
python [environment path]/Scripts/pywin32_postinstall.py -install
import pyarrow.dataset as ds  # import added for completeness

# It does not appear to be documented, but make_write_options
# should accept most of the kwargs that write_table does.
file_options = ds.ParquetFileFormat().make_write_options(version='2.6', data_page_version='2.0')
ds.write_dataset(..., file_options=file_options)  # assumed completion; the original is truncated
import glob
from os.path import isfile
mypath = "./temp/*"
docsOnDisk = glob.glob(mypath)
verified_docsOnDisk = list(filter(lambda x:isfile(x), docsOnDisk))
from pulumi_azure_native import documentdb

containers_name = {
    'mytest1': '/test1',
    'mytest2': '/test2',
    'mytest3': '/test3',
}

# Create Containers; the loop body is an assumed completion of the truncated
# original, with account/database/resource_group defined elsewhere.
for container, path in containers_name.items():
    sql_api_resource_container = documentdb.SqlResourceSqlContainer(
        container,
        account_name=account.name,
        database_name=database.name,
        resource_group_name=resource_group.name,
        resource=documentdb.SqlContainerResourceArgs(
            id=container,
            partition_key=documentdb.ContainerPartitionKeyArgs(paths=[path]),
        ),
    )
app.layout = html.Div([
    dbc.Row([
        html.H6('Color Palette', className='text-left'),
        dcc.Dropdown(
            id='color_range', placeholder="Color",  # Dropdown for heatmap color
            options=colorscales,
            value='Plasma',  # assumed value; the original is truncated at "value='P"
        ),
    ]),
])
import logging

import azure.functions as func

def main(req: func.HttpRequest, outputblob: func.Out[str]) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    # Assumed completion (the original snippet is truncated): write the
    # request body to the output blob binding and return a response.
    outputblob.set(req.get_body().decode('utf-8'))
    return func.HttpResponse("OK", status_code=200)
Community Discussions
Trending Discussions on azure-sdk-for-python
QUESTION
Repost of https://github.com/Azure/azure-sdk-for-python/issues/6731#issuecomment-1028393257
I am testing the retry parameters on ServiceBusClient; it is not clear if or how they work.
Am I doing something wrong, or do I not understand how retry works? In the example below I expect the message to be delivered three times in 30 seconds. Instead it is delivered 10 times, with about 150 milliseconds between deliveries.
...ANSWER
Answered 2022-Feb-03 at 22:13
How retry_backoff_factor is interpreted depends on the retry_mode argument. By default it is set to "exponential"; set retry_mode="fixed" for a constant retry time.
The retry mechanism in general is only relevant for errors that occur within the SDK, for example connection timeouts. You can simulate this by setting retry_total=1, retry_backoff_factor=10, retry_mode="fixed", turning your Internet connection off, and starting your code - an exception should be raised after 10 seconds. If you now change that to retry_total=3, retry_backoff_factor=10, retry_mode="fixed", you'll see the exception after 30 seconds; within that time frame the client has tried to receive messages three times.
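A minimal sketch of these parameters in use (the connection string and queue name are placeholders, not from the original post):

from azure.servicebus import ServiceBusClient

# With retry_mode="fixed", every retry waits retry_backoff_factor seconds,
# so retry_total=3 with a factor of 10 surfaces the error after ~30 seconds.
client = ServiceBusClient.from_connection_string(
    "<connection-string>",
    retry_total=3,
    retry_backoff_factor=10,
    retry_mode="fixed",
)
with client:
    with client.get_queue_receiver(queue_name="<queue-name>") as receiver:
        for message in receiver:
            print(str(message))
            receiver.complete_message(message)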
QUESTION
I have an Azure Service Bus and want to retrieve all topics that are available based on my connection string.
In the Microsoft docs I was able to see that there is a "GetTopics" function for C# - is there something similar available in the Python SDK? I can't find anything in the source code of azure-sdk-for-python.
...ANSWER
Answered 2022-Jan-07 at 18:12
The method you are looking for is list_topics in the ServiceBusAdministrationClient class.
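The sample code linked in the answer was not captured here; a minimal sketch of list_topics, assuming a placeholder connection string:

from azure.servicebus.management import ServiceBusAdministrationClient

# List every topic in the namespace the connection string points at.
with ServiceBusAdministrationClient.from_connection_string("<connection-string>") as admin_client:
    for topic in admin_client.list_topics():
        print(topic.name)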
QUESTION
In an Azure Pipeline on a self-hosted agent I use this task
...ANSWER
Answered 2021-Nov-27 at 06:57
This issue is caused by Azure CLI version 2.30.0, which seems to have been rolled out to MS-hosted agents recently.
Hence I adapted all my Python scripts running on (MS- and self-) hosted agents to this model:
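The exact model was not captured here; a hedged sketch of one common pattern, assuming the script reuses the pipeline's az login context via AzureCliCredential (the subscription id is a placeholder):

from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

# AzureCliCredential picks up the token from the agent's "az login" session,
# which keeps working after the CLI's switch to MSAL in 2.30.0.
credential = AzureCliCredential()
client = ResourceManagementClient(credential, "<subscription-id>")
for group in client.resource_groups.list():
    print(group.name)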
QUESTION
I am working my way through the Python examples of CosmosDB (see CosmosDB for Python) and I see a container definition as follows:
...ANSWER
Answered 2021-Sep-29 at 08:58
"In my opinion, partition is something which applies over keys sharing some common group, for example partition over food groups."
This is not entirely true. If you look at the documentation, it says that you should choose a partition key that has high cardinality. In other words, the property should have a wide range of possible values. It should also be a value that will not change. Note as well that if you want to update or delete a document, you will need to pass the partition key.
What happens in the background is that Cosmos can run on anywhere from one server to many. It uses your partition key to logically partition your data, but the data still lives on one server. If your throughput goes beyond 10K RU or your storage goes beyond 50GB, Cosmos automatically splits your data across 2 physical servers, and the splitting continues until the throughput per server is < 10K RU and the storage per server is < 50GB. This is how Cosmos manages infinite scale. You may ask how to predict which partition a document will land in: you can't. Cosmos produces a hash from your partition key with a value between 1 and the number of servers.
So the doc id is a good partition key because it is unique and can have a large range of values.
Just be aware that once Cosmos partitions to multiple servers, there is currently no automatic way to bring the number of servers back down, even if you reduce the storage or the throughput.
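A minimal sketch of a container definition that follows this advice, assuming azure-cosmos 4.x with placeholder endpoint, key, and names:

from azure.cosmos import CosmosClient, PartitionKey

# Partition on the document id - unique, immutable, and high-cardinality.
client = CosmosClient("<account-endpoint>", credential="<account-key>")
database = client.create_database_if_not_exists(id="mydatabase")
container = database.create_container_if_not_exists(
    id="mycontainer",
    partition_key=PartitionKey(path="/id"),
)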
QUESTION
I am attempting to move to Azure Blob Storage from AWS S3, but am facing some issues with generating the customer-provided encryption key. For reference, in AWS it is possible to have server-side encryption enabled without too much trouble (AWS Server-Side Encryption). In Azure, the same should be possible using a CustomerProvidedEncryptionKey.
The requirements from Microsoft for creating a CustomerProvidedEncryptionKey are as follows (Microsoft Docs on CPK):
...ANSWER
Answered 2021-Oct-21 at 22:44
The issue with the code snippet was the encoding of the key hash. Since the hexdigest of the hash is a Python string representing a hex string, we must take special care to decode it and treat it as hex. Additionally, we must re-encode the base64-encoded strings into Python string objects before passing them to the CustomerProvidedEncryptionKey.
See https://gist.github.com/CodyRichter/a18c293d80c9dd71a3905bf9c44e377f for the complete working code.
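A minimal sketch consistent with that fix (azure-storage-blob; the connection string and names are placeholders): base64-encode the raw key bytes and the SHA-256 digest of those same raw bytes, rather than going through a hexdigest.

import base64
import hashlib
import os

from azure.storage.blob import BlobClient, CustomerProvidedEncryptionKey

raw_key = os.urandom(32)  # a raw 256-bit AES key
cpk = CustomerProvidedEncryptionKey(
    key_value=base64.b64encode(raw_key).decode("utf-8"),
    key_hash=base64.b64encode(hashlib.sha256(raw_key).digest()).decode("utf-8"),
)

blob_client = BlobClient.from_connection_string(
    "<connection-string>", container_name="<container>", blob_name="<blob>"
)
blob_client.upload_blob(b"hello", cpk=cpk, overwrite=True)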
QUESTION
I am unclear as to how I should establish a service principal following the guide laid out here: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/keyvault/azure-keyvault-secrets/README.md#create-a-service-principal-optional
I have an Azure App Service (Web App) that is loaded with a docker image. This App Service also has an app registration tied to it for authentication.
I can execute the code: az ad sp create-for-rbac --name http://my-application --skip-assignment
against three potential targets:
- My Azure App Service web app name
- My Azure AD App Registration name
- A completely new name
What do I select and why?
Later in the guide, it asks that three environment variables be set like so:
...ANSWER
Answered 2021-Sep-15 at 10:19
If you want to access an Azure Key Vault from an Azure Web App for Containers in Python, the recommended way is to use a Managed Identity instead of creating and managing your own Service Principal. Managed Identity is supported in Web App for Containers.
In the Python container, you can access an Azure Key Vault secret using the managed identity with the following lines of code:
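The original lines were not captured; a minimal sketch, assuming azure-identity and azure-keyvault-secrets with a placeholder vault URL and secret name:

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Inside the Web App container the managed identity is picked up
# automatically - no client id/secret environment variables are needed.
credential = ManagedIdentityCredential()
client = SecretClient(vault_url="https://<vault-name>.vault.azure.net", credential=credential)
print(client.get_secret("<secret-name>").value)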
QUESTION
Background: FastAPI + CosmosDB
The official pagination code example for the Cosmos DB Python SDK is shown below. It only shows how to get the next page via the page iterator. How can I get pagination data by a specific page number?
...ANSWER
Answered 2021-Sep-06 at 11:50
For pagination by a specific page number, you can make use of the OFFSET LIMIT clause available in the Cosmos DB SQL API. All you need to do is specify the offset and limit clauses in your query itself. Something like:
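The query itself was not captured; a minimal sketch, assuming azure-cosmos 4.x with placeholder connection details:

from azure.cosmos import CosmosClient

page_number, page_size = 3, 20  # hypothetical paging inputs

client = CosmosClient("<account-endpoint>", credential="<account-key>")
container = client.get_database_client("mydatabase").get_container_client("mycontainer")

# OFFSET skips the earlier pages; LIMIT caps the page size.
items = container.query_items(
    query="SELECT * FROM c OFFSET @offset LIMIT @limit",
    parameters=[
        {"name": "@offset", "value": (page_number - 1) * page_size},
        {"name": "@limit", "value": page_size},
    ],
    enable_cross_partition_query=True,
)
for item in items:
    print(item)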
QUESTION
I have been searching for quite a while and haven't found MaxItemCount for Cosmos DB pagination in the Python Azure SDK, either on the official web site or in the code samples.
The REST API is written with the FastAPI framework and uses Azure Cosmos DB as storage; pagination hasn't been implemented. The Cosmos SDK I'm using is version 3.1.2.
...ANSWER
Answered 2021-Aug-16 at 12:32
For SDK version 3.x (which you're using), please try defining maxItemCount in the query options. Your code would be something like:
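A hedged sketch under the 3.x API (the endpoint, key, and collection link are placeholders):

import azure.cosmos.cosmos_client as cosmos_client

client = cosmos_client.CosmosClient("<account-endpoint>", {"masterKey": "<account-key>"})
options = {"maxItemCount": 10, "enableCrossPartitionQuery": True}

# maxItemCount caps how many items come back per page (block).
query_iterable = client.QueryItems("dbs/mydb/colls/mycoll", "SELECT * FROM c", options)
first_page = query_iterable.fetch_next_block()
for item in first_page:
    print(item)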
QUESTION
Most (all?) of the Azure Storage Python SDK examples I've seen demonstrate creating a BlobServiceClient in order to then create a BlobClient for uploading / downloading blobs (ref1, ref2, etc.).
Why create a BlobServiceClient and then a BlobClient instead of just directly creating a BlobClient?
Example:
...ANSWER
Answered 2021-Aug-02 at 06:14
Why create a BlobServiceClient then a BlobClient instead of just directly creating a BlobClient?
BlobClient only allows you to work with blobs, so if you want to work with just blobs, you can directly create a BlobClient and work with it; there is no need to create a BlobServiceClient first and then create a BlobClient from it.
BlobServiceClient comes into the picture if you want to perform operations at the blob service level, such as setting CORS or listing the blob containers in a storage account. For those you will need a BlobServiceClient.
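A minimal sketch of the direct route (the connection string and names are placeholders):

from azure.storage.blob import BlobClient

# No BlobServiceClient involved - the BlobClient is built directly.
blob_client = BlobClient.from_connection_string(
    "<connection-string>", container_name="<container>", blob_name="example.txt"
)
blob_client.upload_blob(b"hello world", overwrite=True)
print(blob_client.download_blob().readall())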
QUESTION
I have an Azure Data Explorer instance that queries incoming syslogs, then filters and aggregates them. The output of this query is stored in a CSV file on my local computer, so every time I run my Python SDK script, it runs a query and saves the output to a CSV file.
What I am looking for is to push the result of that query to a Cosmos DB.
Looking into the azure-sdk-for-python repository on GitHub, I found a library that can achieve this result with this code.
...ANSWER
Answered 2021-Jun-02 at 15:20
In Cosmos DB terminology, a Container is equivalent to a Table, as a Container holds the data the way a Table does. If you're coming from a relational database world, here's the mapping (kind of):
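The mapping table itself was not captured; roughly, a database account maps to a server, a database to a database, a container to a table, and an item to a row. A minimal sketch of the push step, assuming azure-cosmos with placeholder connection details and hypothetical row data:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-endpoint>", credential="<account-key>")
database = client.create_database_if_not_exists(id="syslogdb")
container = database.create_container_if_not_exists(
    id="syslogs", partition_key=PartitionKey(path="/id")
)

# Each query-result row becomes an item (the "row") in the container (the "table").
rows = [{"id": "1", "host": "server-a", "severity": "warning"}]
for row in rows:
    container.upsert_item(row)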
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install azure-sdk-for-python
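The repository ships as many per-service packages rather than one monolithic distribution; a typical install pulls in just the client libraries you need, for example:

pip install azure-identity
pip install azure-storage-blob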