durabletask | Durable Task Framework allows users to write long running | Job Scheduling library
kandi X-RAY | durabletask Summary
Durable Task Framework allows users to write long running persistent workflows in C# using the async/await capabilities.
Trending Discussions on durabletask
QUESTION
I am trying to debug Azure Functions Python code using the VS Code IDE.
local.settings.json is updated with the below config:
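The asker's exact settings were not captured in this extract; for context, a typical local.settings.json for a local Python Functions project (values illustrative, not the asker's) looks like:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python"
  }
}
```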
ANSWER
Answered 2022-Mar-19 at 02:34
There are several workarounds to resolve ECONNREFUSED 127.0.0.1:9091 in Visual Studio Code (stack: Python Azure Functions):
Approach 1:
Modify the tasks.json file (found in the .vscode folder of your project in VS Code) like the one below:
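The answer's exact tasks.json was not captured in this extract. A representative modification (a sketch built around the ptvsd port mentioned later in the answer; labels and exact arguments may differ from the original) starts the host with the Python worker listening for the debugger on port 9091:

```jsonc
// .vscode/tasks.json (illustrative fragment)
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "shell",
      "label": "func: host start",
      "command": "func host start --language-worker -- \"-m ptvsd --host 127.0.0.1 --port 9091\"",
      "problemMatcher": "$func-watch",
      "isBackground": true
    }
  ]
}
```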
Approach 2:
If you don't want to repeat the above tasks.json modification in every new project whenever the ECONNREFUSED 127.0.0.1:9091 error occurs, you can use the workaround given in the GitHub vscode-azurefunctions Issue No: 760:
- In VS Code, with your Python Azure Functions project open, open the Command Palette from the View menu (or press Ctrl + Shift + P).
- Type "User Settings", go to Features > Terminal, and set "terminal.integrated.shell.windows" to powershell.exe.
The debug task uses a different command depending on the OS, and the command for Windows only works in PowerShell.
Note:
- As per my research, the error message is very generic; different systems/projects of the same type (Azure Functions in Python) have different solutions, as I have experienced.
- For a few systems/projects, changing the port works: 7071 is the default port for executing HTTP functions in the tooling, whereas the debugger needs to attach over a different port.
- While changing the debug port, it should be changed in both launch.json and tasks.json.
There are two distinct ports at work when the host is started with Python debugging capabilities:
- 7071 is the default port where the HTTP endpoint is exposed by the host.
- 9091 is the default port used for starting the debugging endpoint for the Python worker. This is needed for remote attaching to the worker, and it needs to be the same in tasks.json (-m ptvsd --host 127.0.0.1 --port 9091) and launch.json, where it is set to 9091.
Both of these need to be distinct from each other but, other than that, it doesn't matter what values they have. These settings should be handled by the VS Code function-creation experience so that conflicts do not arise.
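As an illustration of the two files staying in sync, the attach side in launch.json points at the same worker port (fragment; names are illustrative):

```jsonc
// .vscode/launch.json (illustrative fragment)
{
  "name": "Attach to Python Functions",
  "type": "python",
  "request": "attach",
  "host": "127.0.0.1",
  "port": 9091,
  "preLaunchTask": "func: host start"
}
```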
Approach 3:
As specified in the MSFT Q&A for the error ECONNREFUSED 127.0.0.1:9091 in Azure Functions Python: declare the Azure Functions extension bindings/bundles explicitly (since Python is a non-.NET language), then run/debug the function.
Approach 4:
As mentioned in the GitHub Azure Functions Issue No 1016, two other long-standing workarounds for this kind of issue are:
- Change the PowerShell separator (;) to the cmd separator (&&) in your .vscode/tasks.json file.
- Change your terminal to PowerShell. See here for more info: https://code.visualstudio.com/docs/editor/integrated-terminal#_configuration
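For example, if the task's combined command was joined with the PowerShell separator, the cmd-compatible form (the command shown is illustrative) becomes:

```jsonc
// .vscode/tasks.json: ";" replaced with "&&"
"command": "func extensions install && func host start"
```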
QUESTION
I have migrated my Azure Functions project from .NET Core 2.2 to .NET Core 3.1 and updated the other NuGet packages to support .NET Core 3.1. Now, when I run my functions project, it loads all the functions and then fails continuously with the below error.
...
ANSWER
Answered 2022-Mar-29 at 11:35
I created a .NET Core 2.2 Azure Functions project (HTTP trigger), added the same NuGet packages with the same versions provided, and migrated it to .NET Core 3.1 with those same package versions. In the local.settings.json file, I replaced the AzureWebJobsStorage value, swapping local storage for an Azure Storage account connection string.
With the local storage emulator ("AzureWebJobsStorage": "UseDevelopmentStorage=true"), it did not work.
From the given error details, I believe the error is due to a storage connection string mismatch.
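The shape of the replacement in local.settings.json (placeholders, not real credentials) is:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
```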
QUESTION
I wanted to run a shell script via a Jenkins pipeline on Cygwin. I am able to do it in a freestyle project, but when I converted the freestyle project into a pipeline, the pipeline project stopped working. I think the root of the problem is "Caused: java.io.IOException: Cannot run program "nohup"". Can anyone suggest a solution?
...
ANSWER
Answered 2022-Feb-04 at 13:36
So, I was able to find the solution by invoking the script through Cygwin's bash directly:
D://CygwinPortable//App//Cygwin//bin//bash.exe --login -c "cd '${env.WORKSPACE}'/ShellScriptDirectory; ./filename.sh "
PS: I used portable Cygwin here.
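In pipeline form, that command sits inside a bat step; a minimal declarative sketch (the stage name and paths are illustrative, not from the original answer) would be:

```groovy
pipeline {
    agent any
    stages {
        stage('Run shell script via Cygwin') {
            steps {
                // Call the portable Cygwin bash directly instead of the sh step,
                // which relies on nohup being available on PATH
                bat "D://CygwinPortable//App//Cygwin//bin//bash.exe --login -c \"cd '${env.WORKSPACE}'/ShellScriptDirectory; ./filename.sh\""
            }
        }
    }
}
```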
QUESTION
In my orchestrator function, I upload a request file to an external server. After some non-deterministic amount of time, a response file should have been generated. I need to poll for this file and download it.
My current approach is to wait 10 mins after upload and then use the built-in CallActivityWithRetryAsync with RetryOptions: after the first poll/download failure, wait 5 mins before starting a total of 10 retry attempts. A retry should only be attempted if an exception with the message RESPONSE_FILE_NOT_YET_AVAILABLE is thrown within the activity function.
ANSWER
Answered 2022-Jan-31 at 11:19
Here are a few links with related discussions. Can you try rechecking your function app against them to resolve your issue?
- When the CallActivityWithRetryAsync call is made, the DurableOrchestrationContext calls the ScheduleWithRetry method of the OrchestrationContext class inside the DurableTask framework.
- There, the Invoke method on the RetryInterceptor class is called, and that does a foreach loop over the maximum number of retries. This class does not expose properties or methods to obtain the number of retries.
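The retry behaviour described in the question can be sketched framework-free. This is an illustrative Python model of RetryOptions-style scheduling (names and intervals taken from the question; it is not the DurableTask implementation):

```python
class ResponseFileNotYetAvailable(Exception):
    """Raised by the activity while the response file is still missing."""

def call_activity_with_retry(activity, max_attempts=10, waits=None):
    """Invoke `activity`, retrying only on ResponseFileNotYetAvailable.

    `waits` records the back-off intervals that would be slept between
    attempts (5 minutes before each retry, per the question).
    """
    first_retry_interval = 5 * 60  # seconds
    for attempt in range(1, max_attempts + 1):
        try:
            return activity()
        except ResponseFileNotYetAvailable:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure
            if waits is not None:
                waits.append(first_retry_interval)

# Example: the response file "appears" on the 3rd poll.
calls = {"n": 0}
def poll():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ResponseFileNotYetAvailable()
    return "response-file-contents"

waits = []
result = call_activity_with_retry(poll, max_attempts=10, waits=waits)
```

Any other exception type escapes immediately, which mirrors the requirement that only RESPONSE_FILE_NOT_YET_AVAILABLE triggers a retry.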
QUESTION
After upgrading the Jenkins plugin Kubernetes Client to version 1.30.3 (also for 1.31.1), I get the following exceptions in the Jenkins logs when I start a build:
...
ANSWER
Answered 2022-Jan-05 at 11:55
Downgrade the plugin to kubernetes-client-api:5.10.1-171.vaa0774fb8c20. The latest one has a compatibility issue as of now.
New info: the issue is now solved by upgrading the Kubernetes plugin to version 1.31.2: https://issues.jenkins.io/browse/JENKINS-67483
QUESTION
I recently tried decrypting and verifying PKCS#7 message in C# .NET Core 3.1 Web Application. Here is how I did it.
I used System.Security.Cryptography.Pkcs in the web app project. But after adding the same package and trying to decrypt and verify in an Azure function, it returns the exception {"Could not load file or assembly 'System.Security.Cryptography.Pkcs, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. The system cannot find the file specified.":"System.Security.Cryptography.Pkcs, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"}.
I tried several Stack Overflow answers but couldn't sort it out. System.Security.Cryptography.X509Certificates works fine in the Azure function, but Pkcs seems to have an issue. Is there any way I can sort this out?
Here is my .csproj:
...
ANSWER
Answered 2021-Dec-08 at 08:26
[Rocky Mishra](https://stackoverflow.com/users/13378259/rocky-mishra), glad that your issue got resolved; posting your suggestion as an answer to help other community members who face a similar kind of issue.
The below example code helps in resolving the issue.
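The resolving code itself was not captured in this extract. A commonly suggested workaround for this assembly-load error in Azure Functions (an assumption here, not necessarily the asker's exact fix) is to reference the package explicitly in the .csproj and stop the Functions build from trimming it out of the output:

```xml
<PropertyGroup>
  <!-- Keep referenced runtime assemblies in the output instead of
       letting the Functions SDK clean them away -->
  <_FunctionsSkipCleanOutput>true</_FunctionsSkipCleanOutput>
</PropertyGroup>
<ItemGroup>
  <PackageReference Include="System.Security.Cryptography.Pkcs" Version="6.0.0" />
</ItemGroup>
```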
QUESTION
One of our Azure Functions v3 apps went from 200 MB of App Insights ingestion to ~18 GB. We did not add any additional logging statements, change any SDKs, or trigger any additional function executions. We do not specify an App Insights SDK in our project, so it is using what Azure has installed. Running the recommended query below from Microsoft to show the sampling percentage makes it obvious that something changed with adaptive sampling.
...
ANSWER
Answered 2021-Sep-11 at 22:45
Adaptive sampling operates on a per-app-instance basis. So, if load decreased per node (either load decreased overall, or you refactored your app, switched to some other plan, etc., and now have much smaller instances), then this can explain the numbers.
To check whether this is the case you can output the following columns:
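The answer's query was not captured here. A sketch of the kind of App Insights (Kusto) query that surfaces the effective sampling percentage per instance, using columns from the standard requests schema, is:

```kusto
requests
| where timestamp > ago(1d)
// itemCount > 1 means a row stands in for several sampled-out telemetry items
| summarize samplingPercentage = 100 / avg(itemCount)
    by cloud_RoleInstance, bin(timestamp, 1h)
| order by timestamp desc
```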
QUESTION
I have an Azure Function that is using durable functions. Here is my local.settings.json:
...
ANSWER
Answered 2021-Sep-06 at 08:36
I come from the Azure SignalR Service team. It is not recommended to use a ConnectionString when configuring AAD auth; we are even updating our own docs to recommend that customers configure AAD auth in code.
I'm not sure which feature of Azure Storage you're using. For Blobs, you could configure a BlobClient through code like the following:
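The answer's code was not captured in this extract. In the Python SDK, the equivalent pattern (the account URL, container, and blob names are placeholders; requires the azure-identity and azure-storage-blob packages) looks like:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

# AAD auth: pass a TokenCredential instead of a connection string
blob = BlobClient(
    account_url="https://<your-account>.blob.core.windows.net",
    container_name="my-container",
    blob_name="my-blob.txt",
    credential=DefaultAzureCredential(),
)
```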
QUESTION
We have a long-running process in an app service (can require over 5 mins) that was timing out due to the non-configurable 230-second timeout on Azure's built-in load balancer.
So we refactored to the async-http API pattern using Azure Durable Functions. We have a single activity function that cannot be easily broken down into smaller bits of work for reasons that are beyond the scope of this question.
I noticed strange results in the output log, and determined that the activity function gets restarted by Azure Functions after a few minutes. I set a breakpoint in the activity function and it gets hit (again) after a few minutes.
This isn't something I configured myself; my calling code that starts the function only executes once. What's going on? How can I make the activity function run to completion?
It works fine and completes as expected when the workload is under a few minutes.
The function app code looks something like this:
...
ANSWER
Answered 2021-Aug-30 at 20:11
It turns out the "restarts" were really just leftover invocations. At some point I had killed the process (this is all local debugging using Azure Functions Tools) and it left some state behind in my Azure Storage emulator. So my local Azure tools thought the durable function was still running and/or needed to be restarted.
To get rid of the zombie invocations I just deleted all the "testhub" tables, queues, and blobs from my emulator using Azure Storage Explorer. These get auto-generated when you run a durable function locally.
QUESTION
I have a function that connects to an external API, and because I can't make too many requests in a short period of time, I'm trying to use an OrchestrationTrigger. All seems to work until the last part, where I update the database at the end and get the following error:
...
ANSWER
Answered 2021-Jul-05 at 16:18
Changing builder.Services.AddSingleton to builder.Services.AddTransient did the trick.
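The difference the fix relies on, singleton versus transient lifetimes in a dependency-injection container, can be illustrated with a minimal framework-agnostic Python sketch (the container here is invented for illustration; it is not the Functions host's container):

```python
class Container:
    """Toy DI container supporting singleton and transient lifetimes."""

    def __init__(self):
        self._factories = {}
        self._singletons = {}

    def add_singleton(self, key, factory):
        self._factories[key] = ("singleton", factory)

    def add_transient(self, key, factory):
        self._factories[key] = ("transient", factory)

    def resolve(self, key):
        lifetime, factory = self._factories[key]
        if lifetime == "singleton":
            # Build once, then hand back the same cached instance forever
            if key not in self._singletons:
                self._singletons[key] = factory()
            return self._singletons[key]
        return factory()  # transient: fresh instance per resolution

class DbContext:
    """Stand-in for a per-operation resource like a database context."""

c = Container()
c.add_singleton("db", DbContext)
shared = c.resolve("db") is c.resolve("db")  # same instance reused

c.add_transient("db", DbContext)
fresh = c.resolve("db") is c.resolve("db")   # new instance each time
```

A singleton shared across concurrent orchestration activities can carry stale or conflicting state, which is why switching the registration to transient resolved the error.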
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported