
beam | Apache Beam is a unified programming model

by apache | Java | Version: v2.38.0 | License: Non-SPDX


kandi X-RAY | beam Summary

beam is a Java library typically used in Telecommunications, Media and Entertainment, and Big Data applications. beam has high support. However, beam has 646 bugs and 10 vulnerabilities, its build file is not available, and it has a Non-SPDX license. You can download it from GitHub.
Beam provides a general approach to expressing embarrassingly parallel data processing pipelines and supports three categories of users, each of which have relatively disparate backgrounds and needs.
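
As a quick illustration of that model, here is a minimal sketch of a Beam pipeline in Java. It assumes beam-sdks-java-core and the direct runner are on the classpath; the class and transform names are illustrative and not taken from the repository.

import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

// Build a pipeline from PCollections and PTransforms, then hand it to a runner.
public class MinimalBeamPipeline {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();

    p.apply("CreateWords", Create.of(Arrays.asList("embarrassingly", "parallel", "data")))
     .apply("WordLengths", MapElements
         .into(TypeDescriptors.integers())
         .via(word -> word.length()))
     .apply("PrintLengths", MapElements
         .into(TypeDescriptors.integers())
         .via(len -> { System.out.println(len); return len; }));

    p.run().waitUntilFinish();
  }
}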

Support

  • beam has a highly active ecosystem.
  • It has 5450 star(s) with 3455 fork(s). There are 257 watchers for this library.
  • There were 2 major release(s) in the last 6 months.
  • beam has no issues reported on GitHub. There are 144 open pull requests and 0 closed pull requests.
  • It has a positive sentiment in the developer community.
  • The latest version of beam is v2.38.0.

Quality

  • beam has 646 bugs (94 blocker, 17 critical, 314 major, 221 minor) and 23987 code smells.

Security

  • No vulnerabilities have been reported against beam or its dependent libraries in public vulnerability databases.
  • beam code analysis shows 10 unresolved vulnerabilities (3 blocker, 4 critical, 3 major, 0 minor).
  • There are 369 security hotspots that need review.

License

  • beam has a Non-SPDX License.
  • Non-SPDX licenses may be open-source licenses that are simply not SPDX-compliant, or they may not be open source at all; review them closely before use.

Reuse

  • beam releases are available to install and integrate.
  • beam has no build file, so you will need to create the build yourself to build the component from source.
  • Installation instructions are available. Examples and code snippets are not available.
  • beam saves you 2104274 person hours of effort in developing the same functionality from scratch.
  • It has 859817 lines of code, 72651 functions and 6263 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed beam and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality beam implements and to help you decide whether it suits your requirements.

  • Parse a DoFn signature.
  • Extracts extra context parameters from DoFn.
  • Returns stream of artifact retrieval service.
  • Provides a list of all transform overrides.
  • Main entry point.
  • Process the timers.
  • Translate ParDo.
  • Send worker updates to Dataflow service.
  • Creates a Function that maps a source to a Source.
  • Convert a field type to proto.

beam Key Features

End Users: Writing pipelines with an existing SDK, running it on an existing runner. These users want to focus on writing their application logic and have everything else just work.

SDK Writers: Developing a Beam SDK targeted at a specific user community (Java, Python, Scala, Go, R, graphical, etc). These users are language geeks and would prefer to be shielded from all the details of various runners and their implementations.

Runner Writers: Have an execution environment for distributed processing and would like to support programs written against the Beam Model. Would prefer to be shielded from details of multiple SDKs.

Couchbase with Azure Linux VM

couchbase:// 
http://

Colab: (0) UNIMPLEMENTED: DNN library is not found

!pip install tensorflow==2.7.0
-----------------------
'tensorflow==2.7.0',
'tf-models-official==2.7.0',
'tensorflow_io==0.23.1',

Apache Beam Cloud Dataflow Streaming Stuck Side Input

import logging

import apache_beam as beam
from apache_beam import Map, ParDo, WindowInto
from apache_beam.io.gcp.bigquery import ReadAllFromBigQuery
from apache_beam.io.gcp.pubsub import ReadFromPubSub
from apache_beam.transforms import trigger, window
from apache_beam.transforms.periodicsequence import PeriodicImpulse

mytopic = ""
sql = "SELECT station_id, CURRENT_TIMESTAMP() timestamp FROM `bigquery-public-data.austin_bikeshare.bikeshare_stations` LIMIT 10"

def to_bqrequest(e, sql):
    from apache_beam.io import ReadFromBigQueryRequest
    yield ReadFromBigQueryRequest(query=sql)


def merge(e, side):
    for i in side:
        yield f"Main {e.decode('utf-8')} Side {i}"

# p is a streaming Pipeline created earlier, e.g. p = beam.Pipeline(options=pipeline_options)
pubsub = p | "Read PubSub topic" >> ReadFromPubSub(topic=mytopic)

side_pcol = (p | PeriodicImpulse(fire_interval=300, apply_windowing=False)
               | "ApplyGlobalWindow" >> WindowInto(window.GlobalWindows(),
                                           trigger=trigger.Repeatedly(trigger.AfterProcessingTime(5)),
                                           accumulation_mode=trigger.AccumulationMode.DISCARDING)
               | "To BQ Request" >> ParDo(to_bqrequest, sql=sql)
               | ReadAllFromBigQuery()
            )

final = (pubsub | "Merge" >> ParDo(merge, side=beam.pvalue.AsList(side_pcol))
                | Map(logging.info)
        )

p.run()

Access Apache Beam metrics values during pipeline run in python?

# result is the PipelineResult returned by pipeline.run()
counters = result.metrics().query(beam.metrics.MetricsFilter())['counters']
for metric in counters:
    print(metric)

Type error with simple where-clause with Haskell's beam

-- Original version (causes the type error): the plain Haskell value `bar`
-- is compared directly against a database column expression.
selectFoosByBar bar = select $
    filter_ (\foo -> _fooBar foo ==. bar) $
        all_ $ _bazFoos bazDb

-- Fixed version: lift the Haskell value into a SQL expression with val_.
selectFoosByBar bar = select $
    filter_ (\foo -> _fooBar foo ==. val_ bar) $
        all_ $ _bazFoos bazDb

-- The same fixed version with the full type signature spelled out.
selectFoosByBar
  :: (HasQBuilder be, HasSqlEqualityCheck be Word32,
      HasSqlValueSyntax
        (Sql92ExpressionValueSyntax
           (Sql92SelectTableExpressionSyntax
              (Sql92SelectSelectTableSyntax
                 (Sql92SelectSyntax (BeamSqlBackendSyntax be)))))
        Word32) =>
     Word32 -> SqlSelect be (FooT Identity)
selectFoosByBar bar = select $
    filter_ (\foo -> _fooBar foo ==. val_ bar) $
        all_ $ _bazFoos bazDb

-- A constraint synonym keeps the signature readable.
type MagicSql be =
      HasSqlValueSyntax
        (Sql92ExpressionValueSyntax
           (Sql92SelectTableExpressionSyntax
              (Sql92SelectSelectTableSyntax
                 (Sql92SelectSyntax (BeamSqlBackendSyntax be)))))

selectFoosByBar
  :: (HasQBuilder be, HasSqlEqualityCheck be Word32, MagicSql be Word32) =>
     Word32 -> SqlSelect be (FooT Identity)

Tensorflow Object Detection API taking forever to install in a Google Colab and failing

pip install --upgrade pip 

Apache Beam update current row values based on the values from previous row

static class SortAndForwardFillFn extends DoFn<KV<String, Iterable<Row>>, KV<String, Iterable<Row>>> {

    @ProcessElement
    public void processElement(@Element KV<String, Iterable<Row>> element, OutputReceiver<KV<String, Iterable<Row>>> outputReceiver) {

        // Create a formatter for parsing dates
        DateTimeFormatter formatter = DateTimeFormat.forPattern("dd/MM/yyyy HH:mm:ss");

        // Convert the grouped iterable to an array
        Row[] rowArray = Iterables.toArray(element.getValue(), Row.class);

        // Sort array using dates
        Arrays
            .sort(
                rowArray,
                Comparator
                .comparingLong(row -> formatter.parseDateTime(row.getString("date")).getMillis())
        );

        // Store the last amount
        Double lastAmount = 0.0;

        // Create a List for storing sorted and filled rows
        List<Row> resultRows = new ArrayList<>(rowArray.length);

        // Iterate over the array and fill in the missing parts
        for (Row row : rowArray) {

            // Get current amount
            Double currentAmount = row.getDouble("amount");

            // If null, fill the previous value and add to results,
            // otherwise add as it is
            resultRows.add(...);
        }

        // Output using the output receiver
        outputReceiver
            .output(
                KV.of(element.getKey(), resultRows)
            );
    }
}
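
For orientation, a hedged sketch of how a DoFn like this is typically wired into a pipeline is shown below; the keyedRows collection and its key are assumptions for illustration, not part of the original answer.

// Hypothetical wiring: keyedRows is a PCollection<KV<String, Row>> produced earlier
// in the pipeline, keyed by whatever id the rows should be forward-filled within.
PCollection<KV<String, Iterable<Row>>> filled =
    keyedRows
        .apply("GroupByKey", GroupByKey.<String, Row>create())
        .apply("SortAndForwardFill", ParDo.of(new SortAndForwardFillFn()));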

Apache Beam - Aggregate date from beginning to logged timestamps

    public interface RollingMinMaxOptions extends PipelineOptions {
        @Description("Topic to read from")
        @Default.String("projects/pubsub-public-data/topics/taxirides-realtime")
        String getTopic();

        void setTopic(String value);
    }

    public static class MinMax extends Combine.CombineFn<Float, KV<Float, Float>, KV<Float, Float>> { //Types: Input, Accum, Output
        @Override
        public KV<Float, Float> createAccumulator() {
            KV<Float, Float> start = KV.of(Float.POSITIVE_INFINITY, 0f);
            return start;
        }

        @Override
        public KV<Float, Float> addInput(KV<Float, Float> accumulator, Float input) {
            Float max = Math.max(accumulator.getValue(), input);
            Float min = Math.min(accumulator.getKey(), input);
            return KV.of(min, max);
        }

        @Override
        public KV<Float, Float> mergeAccumulators(Iterable<KV<Float, Float>> accumulators) {
            Float max = 0f;
            Float min = Float.POSITIVE_INFINITY;
            for (KV<Float, Float> kv : accumulators) {
                max = Math.max(kv.getValue(), max);
                min = Math.min(kv.getKey(), min);
            }
            return KV.of(min, max);
        }

        @Override
        public KV<Float, Float> extractOutput(KV<Float, Float> accumulator) {
            return accumulator;

        }
    }

    public static void main(String[] args) {
        RollingMinMaxOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(RollingMinMaxOptions.class);

        Pipeline p = Pipeline.create(options);

        p
                .apply("ReadFromPubSub", PubsubIO.readStrings().fromTopic(options.getTopic()))
                .apply("Get meter reading", ParDo.of(new DoFn<String, Float>() {
                            @ProcessElement
                            public void processElement(ProcessContext c) throws ParseException {
                                JSONObject json = new JSONObject(c.element());

                                String rideStatus = json.getString("ride_status");
                                Float meterReading = json.getFloat("meter_reading");

                                if (rideStatus.equals("dropoff") && meterReading > 0){
                                    c.output(meterReading);
                                }
                            }
                        })
                )
                .apply(Window.<Float>into(
                        new GlobalWindows())
                        .triggering(Repeatedly.forever(
                                AfterPane.elementCountAtLeast(1)
                            )
                        )
                        .withTimestampCombiner(TimestampCombiner.LATEST)
                        .accumulatingFiredPanes()
                )
                .apply(Combine.globally(new MinMax()))
                .apply("Format", ParDo.of(new DoFn<KV<Float, Float>, TableRow>() {
                    @ProcessElement
                    public void processElement(ProcessContext c) throws ParseException {
                        TableRow row = new TableRow();

                        row.set("min", c.element().getKey());
                        row.set("max", c.element().getValue());
                        row.set("timestamp", c.timestamp().toString());

                        LOG.info(row.toString());
                        c.output(row);
                    }
                })
        );

        p.run();
    }

How do I set the coder for a PCollection<List<String>> in Apache Beam?

Exception in thread "main" java.lang.IllegalStateException: Unable to return a default Coder for Traverse Json tree/MapElements/Map/ParMultiDo(Anonymous).output [PCollection@1324829744]. 
...
...
Inferring a Coder from the CoderRegistry failed: Unable to provide a Coder 
for com.fasterxml.jackson.databind.JsonNode.
Building a Coder using a registered CoderProvider failed.
public class JsonNodeCoder extends CustomCoder<JsonNode> {

    @Override
    public void encode(JsonNode node, OutputStream outStream) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        String nodeString = mapper.writeValueAsString(node);
        outStream.write(nodeString.getBytes());
    }

    @Override
    public JsonNode decode(InputStream inStream) throws IOException {
        byte[] bytes = IOUtils.toByteArray(inStream);
        ObjectMapper mapper = new ObjectMapper();
        String json = new String(bytes);
        return mapper.readTree(json);
    }
}
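
To answer the title question directly, a coder for a PCollection<List<String>> can be built by composing Beam's built-in coders rather than writing a CustomCoder. The sketch below is illustrative (the class and transform names are not from the original answer); it sets ListCoder over StringUtf8Coder on a Create transform. For a PCollection produced by some other transform, calling setCoder(ListCoder.of(StringUtf8Coder.of())) on that PCollection achieves the same thing.

import java.util.Arrays;
import java.util.List;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.ListCoder;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

// Compose the built-in ListCoder and StringUtf8Coder for a PCollection<List<String>>.
public class ListCoderExample {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();

    PCollection<List<String>> lists =
        p.apply("CreateLists",
            Create.<List<String>>of(Arrays.asList("a", "b"), Arrays.asList("c"))
                .withCoder(ListCoder.of(StringUtf8Coder.of())));

    p.run().waitUntilFinish();
  }
}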

Automatically running CMD after a Python script runs

import os
os.chdir("C:/path/to/the/file") ## Do this if you're not currently in the directory that contains the .exe file
os.system("start NAMEOFAPP.exe") ## This command runs the app

Community Discussions

Trending Discussions on beam
  • Couchbase with Azure Linux VM
  • Colab: (0) UNIMPLEMENTED: DNN library is not found
  • Apache Beam Performance Between Python Vs Java Running on GCP Dataflow
  • Apache Beam Cloud Dataflow Streaming Stuck Side Input
  • Access Apache Beam metrics values during pipeline run in python?
  • Type error with simple where-clause with Haskell's beam
  • Tensorflow Object Detection API taking forever to install in a Google Colab and failing
  • Apache Beam update current row values based on the values from previous row
  • Apache Beam - Aggregate date from beginning to logged timestamps
  • How do I set the coder for a PCollection<List<String>> in Apache Beam?

QUESTION

Couchbase with Azure Linux VM

Asked 2022-Feb-14 at 08:37

I installed an Ubuntu Server VM on Azure and installed Couchbase Community Edition on it. Now I need to access Couchbase using the .NET SDK, but my code gives a "bucket not found or unreachable" error. I even tried configuring a public DNS name and used it as the IP during cluster creation, but it still gives the same error. I also added the public DNS name to the hosts file, like 127.0.0.1 <public dns>. The SDK log includes these two statements: Attempted bootstrapping on endpoint "name.eastus.cloudapp.azure.com" has failed. (e80489ed) A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

SDK Doctor Log:

09:51:20.331 INFO ▶ Parsing connection string `couchbases://hsotname.eastus.cloudapp.azure.com/travel-sample`
09:51:20.334 INFO ▶ Connection string was parsed as a potential DNS SRV record
09:51:31.316 INFO ▶ Connection string specifies to use secured connections
09:51:31.316 INFO ▶ Connection string identifies the following CCCP endpoints:
09:51:31.316 INFO ▶   1. hsotname.eastus.cloudapp.azure.com:11207
09:51:31.316 INFO ▶ Connection string identifies the following HTTP endpoints:
09:51:31.316 INFO ▶   1. hsotname.eastus.cloudapp.azure.com:18091
09:51:31.316 INFO ▶ Connection string specifies bucket `travel-sample`
09:51:31.316 WARN ▶ No certificate authority file specified (--tls-ca), skipping server certificate verification for this run.
09:51:42.453 WARN ▶ Your connection string specifies only a single host.  You should consider adding additional static nodes from your cluster to this list to improve your applications fault-tolerance
09:51:42.462 INFO ▶ Performing DNS lookup for host `hsotname.eastus.cloudapp.azure.com`
09:51:42.462 INFO ▶ Bootstrap host `hsotname.eastus.cloudapp.azure.com` refers to a server with the address `13.82.80.55`
09:51:42.462 INFO ▶ Attempting to connect to cluster via CCCP
09:51:42.463 INFO ▶ Attempting to fetch config via cccp from `hsotname.eastus.cloudapp.azure.com:11207`
09:51:44.474 ERRO ▶ Failed to fetch configuration via cccp from `hsotname.eastus.cloudapp.azure.com:11207` (error: dial tcp 13.82.80.55:11207: i/o timeout)
09:51:44.474 INFO ▶ Attempting to connect to cluster via HTTP (Terse)
09:51:44.474 INFO ▶ Attempting to fetch terse config via http from `hsotname.eastus.cloudapp.azure.com:18091`
09:51:46.480 ERRO ▶ Failed to fetch terse configuration via http from `hsotname.eastus.cloudapp.azure.com:18091` (error: Get "http://hsotname.eastus.cloudapp.azure.com:18091/pools/default/b/travel-sample": context deadline exceeded (Client.Timeout exceeded while awaiting headers))
09:51:46.480 INFO ▶ Attempting to connect to cluster via HTTP (Full)
09:51:46.480 INFO ▶ Failed to connect via HTTP (Full), as it is not yet supported by the doctor
09:51:46.481 INFO ▶ Selected the following network type:
09:51:46.481 ERRO ▶ All endpoints specified by your connection string were unreachable, further cluster diagnostics are not possible
09:51:46.481 INFO ▶ Diagnostics completed

Summary:
[WARN] No certificate authority file specified (--tls-ca), skipping server certificate verification for this run.
[WARN] Your connection string specifies only a single host.  You should consider adding additional static nodes from your cluster to this list to improve your applications fault-tolerance
[ERRO] Failed to fetch configuration via cccp from `hsotname.eastus.cloudapp.azure.com:11207` (error: dial tcp 13.82.80.55:11207: i/o timeout)
[ERRO] Failed to fetch terse configuration via http from `hsotname.eastus.cloudapp.azure.com:18091` (error: Get "http://hsotname.eastus.cloudapp.azure.com:18091/pools/default/b/travel-sample": context deadline exceeded (Client.Timeout exceeded while awaiting headers))
[ERRO] All endpoints specified by your connection string were unreachable, further cluster diagnostics are not possible

Found multiple issues, see listing above.

I added both ports 18091 and 11207 as inbound rules. My ufw status is inactive. The two ports mentioned above are not listening:

couchbaseadm@couchbasedbserver:~$ sudo lsof -i -P -n | grep LISTEN

systemd-r   926 systemd-resolve   13u  IPv4   18715      0t0  TCP 127.0.0.53:53 (LISTEN)
sshd       1103            root    3u  IPv4   21086      0t0  TCP *:22 (LISTEN)
sshd       1103            root    4u  IPv6   21088      0t0  TCP *:22 (LISTEN)
beam.smp   6323       couchbase   17u  IPv4 3937812      0t0  TCP 127.0.0.1:21200 (LISTEN)
epmd       6354       couchbase    3u  IPv4 3937267      0t0  TCP *:4369 (LISTEN)
epmd       6354       couchbase    4u  IPv6 3937268      0t0  TCP *:4369 (LISTEN)
beam.smp   6465       couchbase   34u  IPv4 3943391      0t0  TCP *:21100 (LISTEN)
beam.smp   6465       couchbase   48u  IPv4 3938657      0t0  TCP *:8091 (LISTEN)
beam.smp   6514       couchbase   17u  IPv4 3938608      0t0  TCP 127.0.0.1:21300 (LISTEN)
beam.smp   6514       couchbase   27u  IPv4 3938628      0t0  TCP *:8092 (LISTEN)
prometheu  6563       couchbase    9u  IPv4 3938650      0t0  TCP 127.0.0.1:9123 (LISTEN)
goxdcr     6583       couchbase   11u  IPv4 3938705      0t0  TCP 127.0.0.1:9998 (LISTEN)
memcached  6592       couchbase    5u  IPv4 3938689      0t0  TCP 127.0.0.1:11280 (LISTEN)
memcached  6592       couchbase   12u  IPv4 3937931      0t0  TCP *:11210 (LISTEN)
memcached  6592       couchbase   13u  IPv4 3937932      0t0  TCP *:11209 (LISTEN)
memcached  6592       couchbase   14u  IPv6 3937933      0t0  TCP *:11210 (LISTEN)
memcached  6592       couchbase   15u  IPv6 3937934      0t0  TCP *:11209 (LISTEN)
indexer    6741       couchbase   16u  IPv4 3944492      0t0  TCP *:9101 (LISTEN)
indexer    6741       couchbase   19u  IPv4 3944066      0t0  TCP *:9100 (LISTEN)
indexer    6741       couchbase   20u  IPv4 3944500      0t0  TCP *:9102 (LISTEN)
indexer    6741       couchbase   69u  IPv4 3946013      0t0  TCP *:9105 (LISTEN)
projector  6762       couchbase    9u  IPv4 3944075      0t0  TCP *:9999 (LISTEN)
cbq-engin  6782       couchbase    7u  IPv6 3944534      0t0  TCP *:8093 (LISTEN)
cbq-engin  6782       couchbase    8u  IPv4 3944535      0t0  TCP *:8093 (LISTEN)
cbft       6799       couchbase    8u  IPv4 3944112      0t0  TCP *:9130 (LISTEN)
cbft       6799       couchbase    9u  IPv4 3944149      0t0  TCP *:8094 (LISTEN)
sync_gate 11950    sync_gateway    8u  IPv4 4119414      0t0  TCP 127.0.0.1:4985 (LISTEN)
sync_gate 11950    sync_gateway    9u  IPv6 4119422      0t0  TCP *:4984 (LISTEN)

Here is the stacktrace:

StackTrace " at Couchbase.Core.ClusterContext.d__58.MoveNext()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Threading.Tasks.ValueTask`1.get_Result()
   at System.Runtime.CompilerServices.ConfiguredValueTaskAwaitable`1.ConfiguredValueTaskAwaiter.GetResult()
   at Couchbase.Cluster.<>c__DisplayClass30_0.<b__0>d.MoveNext()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Threading.Tasks.ValueTask`1.get_Result()
   at System.Runtime.CompilerServices.ConfiguredValueTaskAwaitable`1.ConfiguredValueTaskAwaiter.GetResult()

Dotnet SDK Log

2022-02-09T17:28:46.3409884+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:28:48.8643285+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:28:48.8649060+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:28:51.3664735+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:28:51.3667541+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:28:53.8811651+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:28:53.8814100+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:28:56.3823825+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:28:56.3826183+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:28:58.8964320+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:28:58.8967224+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:01.4007664+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:29:01.4010274+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:01.7019750+05:30  [INF] Error trying to retrieve DNS SRV entries. (addddf06)
DnsClient.DnsResponseException: Query 12389 => _couchbases._tcp.hsotname.eastus.cloudapp.azure.com IN SRV on 192.168.8.1:53 timed out or is a transient error.
 ---> System.OperationCanceledException: The operation was canceled.
   at System.Threading.Tasks.TaskExtensions.WithCancellation[T](Task`1 task, CancellationToken cancellationToken, Action onCancel)
   at DnsClient.LookupClient.ResolveQueryAsync(IReadOnlyList`1 servers, DnsQuerySettings settings, DnsMessageHandler handler, DnsRequestMessage request, LookupClientAudit audit, CancellationToken cancellationToken)
   --- End of inner exception stack trace ---
   at DnsClient.LookupClient.ResolveQueryAsync(IReadOnlyList`1 servers, DnsQuerySettings settings, DnsMessageHandler handler, DnsRequestMessage request, LookupClientAudit audit, CancellationToken cancellationToken)
   at DnsClient.LookupClient.QueryInternalAsync(DnsQuestion question, DnsQuerySettings queryOptions, IReadOnlyCollection`1 servers, CancellationToken cancellationToken)
   at Couchbase.DnsClientDnsResolver.GetDnsSrvEntriesAsync(Uri bootstrapUri, CancellationToken cancellationToken)
   at Couchbase.Core.ClusterContext.BootstrapGlobalAsync()
2022-02-09T17:29:01.7034867+05:30  [DBG] Bootstrapping with node "hsotname.eastus.cloudapp.azure.com" (98ca0e33)
2022-02-09T17:29:03.9124149+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:29:03.9127285+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:06.4201295+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:29:06.4205385+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:08.9317820+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:29:08.9320832+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:11.4459313+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:29:11.4463142+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:12.1488979+05:30  [DBG] Attempted bootstrapping on endpoint "hsotname.eastus.cloudapp.azure.com" has failed. (e80489ed)
System.Net.Sockets.SocketException (10060): A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
   at Couchbase.Core.IO.Connections.ConnectionFactory.CreateAndConnectAsync(HostEndpointWithPort hostEndpoint, CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.ConnectionPoolBase.CreateConnectionAsync(CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.<>c__DisplayClass30_0.<<AddConnectionsAsync>g__StartConnection|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.AddConnectionsAsync(Int32 count, CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.InitializeAsync(CancellationToken cancellationToken)
   at Couchbase.Core.ClusterNode.InitializeAsync()
   at Couchbase.Core.DI.ClusterNodeFactory.CreateAndConnectAsync(HostEndpointWithPort endPoint, BucketType bucketType, NodeAdapter nodeAdapter, CancellationToken cancellationToken)
   at Couchbase.Core.ClusterContext.BootstrapGlobalAsync()
2022-02-09T17:29:33.3259787+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:29:33.3262710+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:35.8341848+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:29:35.8343993+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:36.9552318+05:30  [DBG] Setting TCP Keep-Alives using SocketOptions - enable keep-alives True, time 00:01:00, interval 00:00:01. (d66a37aa)
2022-02-09T17:29:36.9596725+05:30  [DBG] Setting TCP Keep-Alives using SocketOptions - enable keep-alives True, time 00:01:00, interval 00:00:01. (d66a37aa)
2022-02-09T17:29:37.0170984+05:30  [INF] Cannot bootstrap bucket "travel-sample" as Couchbase. (1ecb21a9)
System.IO.IOException: The operation is not allowed on non-connected sockets.
   at System.Net.Sockets.NetworkStream..ctor(Socket socket, FileAccess access, Boolean ownsSocket)
   at Couchbase.Core.IO.Connections.ConnectionFactory.CreateAndConnectAsync(HostEndpointWithPort hostEndpoint, CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.ConnectionPoolBase.CreateConnectionAsync(CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.<>c__DisplayClass30_0.<<AddConnectionsAsync>g__StartConnection|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.AddConnectionsAsync(Int32 count, CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.InitializeAsync(CancellationToken cancellationToken)
   at Couchbase.Core.ClusterNode.InitializeAsync()
   at Couchbase.Core.DI.ClusterNodeFactory.CreateAndConnectAsync(HostEndpointWithPort endPoint, BucketType bucketType, NodeAdapter nodeAdapter, CancellationToken cancellationToken)
   at Couchbase.Core.ClusterContext.CreateAndBootStrapBucketAsync(String name, HostEndpointWithPort endpoint, BucketType type)
   at Couchbase.Core.ClusterContext.GetOrCreateBucketAsync(String name)
2022-02-09T17:29:38.3360012+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:29:38.3361875+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:40.3244490+05:30  [DBG] Setting TCP Keep-Alives using SocketOptions - enable keep-alives True, time 00:01:00, interval 00:00:01. (d66a37aa)
2022-02-09T17:29:40.3507801+05:30  [DBG] Setting TCP Keep-Alives using SocketOptions - enable keep-alives True, time 00:01:00, interval 00:00:01. (d66a37aa)
2022-02-09T17:29:40.3525230+05:30  [INF] Cannot bootstrap bucket "travel-sample" as Memcached. (1ecb21a9)
System.IO.IOException: The operation is not allowed on non-connected sockets.
   at System.Net.Sockets.NetworkStream..ctor(Socket socket, FileAccess access, Boolean ownsSocket)
   at Couchbase.Core.IO.Connections.ConnectionFactory.CreateAndConnectAsync(HostEndpointWithPort hostEndpoint, CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.ConnectionPoolBase.CreateConnectionAsync(CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.<>c__DisplayClass30_0.<<AddConnectionsAsync>g__StartConnection|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.AddConnectionsAsync(Int32 count, CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.InitializeAsync(CancellationToken cancellationToken)
   at Couchbase.Core.ClusterNode.InitializeAsync()
   at Couchbase.Core.DI.ClusterNodeFactory.CreateAndConnectAsync(HostEndpointWithPort endPoint, BucketType bucketType, NodeAdapter nodeAdapter, CancellationToken cancellationToken)
   at Couchbase.Core.ClusterContext.CreateAndBootStrapBucketAsync(String name, HostEndpointWithPort endpoint, BucketType type)
   at Couchbase.Core.ClusterContext.GetOrCreateBucketAsync(String name)
2022-02-09T17:29:40.8385667+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:29:40.8387609+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:43.3380840+05:30  [DBG] Done waiting, polling... (93018145)
2022-02-09T17:29:43.3382393+05:30  [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2022-02-09T17:29:43.6633010+05:30  [DBG] Setting TCP Keep-Alives using SocketOptions - enable keep-alives True, time 00:01:00, interval 00:00:01. (d66a37aa)
2022-02-09T17:29:43.6842924+05:30  [DBG] Setting TCP Keep-Alives using SocketOptions - enable keep-alives True, time 00:01:00, interval 00:00:01. (d66a37aa)
2022-02-09T17:29:43.6862758+05:30  [INF] Cannot bootstrap bucket "travel-sample" as Ephemeral. (1ecb21a9)
System.IO.IOException: The operation is not allowed on non-connected sockets.
   at System.Net.Sockets.NetworkStream..ctor(Socket socket, FileAccess access, Boolean ownsSocket)
   at Couchbase.Core.IO.Connections.ConnectionFactory.CreateAndConnectAsync(HostEndpointWithPort hostEndpoint, CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.ConnectionPoolBase.CreateConnectionAsync(CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.<>c__DisplayClass30_0.<<AddConnectionsAsync>g__StartConnection|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.AddConnectionsAsync(Int32 count, CancellationToken cancellationToken)
   at Couchbase.Core.IO.Connections.DataFlow.DataFlowConnectionPool.InitializeAsync(CancellationToken cancellationToken)
   at Couchbase.Core.ClusterNode.InitializeAsync()
   at Couchbase.Core.DI.ClusterNodeFactory.CreateAndConnectAsync(HostEndpointWithPort endPoint, BucketType bucketType, NodeAdapter nodeAdapter, CancellationToken cancellationToken)
   at Couchbase.Core.ClusterContext.CreateAndBootStrapBucketAsync(String name, HostEndpointWithPort endpoint, BucketType type)
   at Couchbase.Core.ClusterContext.GetOrCreateBucketAsync(String name)

Thanks!!

ANSWER

Answered 2022-Feb-11 at 17:23

Thank you for providing so much detailed information! I suspect the immediate issue is that you are trying to connect using TLS, which is not supported by Couchbase Community Edition (at least not as of February 2022). Ports 11207 and 18091 are for TLS connections; as you observed in the lsof output, the server is not listening on those ports.

Source https://stackoverflow.com/questions/71059720

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install beam

To learn how to write Beam pipelines, read the Quickstart for Java, Python, or Go, available on the Apache Beam website.

Support

To get involved in Apache Beam, see the project's community channels. Instructions for building and testing Beam itself are in the contribution guide.
