
rethinkdb | The open-source database for the realtime web | Database library

by rethinkdb | C++ | Version: v2.4.1 | License: Apache-2.0


kandi X-RAY | rethinkdb Summary

rethinkdb is a C++ library typically used in Database, Node.js, and MongoDB applications. It has no reported bugs or vulnerabilities, carries a permissive license, and has medium support. You can download it from GitHub.

RethinkDB is an open-source, distributed NoSQL database for building realtime web applications. It stores schemaless JSON documents, is easy to scale, and provides high availability with automatic failover and robust fault tolerance. RethinkDB is the first open-source scalable database built for realtime applications: it exposes a new database access model in which the developer tells the database to continuously push updated query results to applications, without polling for changes. This lets developers build scalable realtime apps in a fraction of the time and with less effort. To learn more, check out [rethinkdb.com](https://rethinkdb.com).
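As a rough illustration of that push model, a changefeed consumer can be sketched with the official Python driver. The `messages` table and the `describe_change` helper below are hypothetical, and a local server is assumed for the commented driver calls.

```python
def describe_change(change):
    """Classify one changefeed document: for every write to the watched
    table, the server pushes a dict with old_val/new_val keys."""
    old, new = change.get("old_val"), change.get("new_val")
    if old is None:
        return "insert"
    if new is None:
        return "delete"
    return "update"

# With the official Python driver and a local server, the feed itself
# would be consumed like this (no polling loop on the client side):
#
#   from rethinkdb import RethinkDB
#   r = RethinkDB()
#   conn = r.connect("localhost", 28015)
#   for change in r.table("messages").changes().run(conn):
#       print(describe_change(change), change)
```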

Support

  • rethinkdb has a medium-active ecosystem.
  • It has 24,920 stars and 1,820 forks. There are 818 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 1,409 open issues and 4,917 closed ones; on average, issues are closed in 202 days. There are 23 open pull requests and 0 closed requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of rethinkdb is v2.4.1.

Quality

  • rethinkdb has no bugs reported.

Security

  • rethinkdb has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

  • rethinkdb is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • rethinkdb releases are available to install and integrate.
  • Installation instructions, examples and code snippets are available.

rethinkdb Key Features

  • Open-source database for building realtime web applications
  • NoSQL database that stores schemaless JSON documents
  • Distributed database that is easy to scale
  • High-availability database with automatic failover and robust fault tolerance
  • Build a [realtime liveblog](https://rethinkdb.com/blog/rethinkdb-pubnub/) with RethinkDB and PubNub
  • Create a [collaborative photo sharing whiteboard](https://www.youtube.com/watch?v=pdPRp3UxL_s)
  • Build an [IRC bot in Go](https://rethinkdb.com/blog/go-irc-bot/) with RethinkDB changefeeds
  • Look at [cats on Instagram in realtime](https://rethinkdb.com/blog/cats-of-instagram/)
  • Watch [how Platzi uses RethinkDB](https://www.youtube.com/watch?v=Nb_UzRYDB40) to educate

Building

sudo apt-get install build-essential protobuf-compiler python \
    libprotobuf-dev libcurl4-openssl-dev libboost-all-dev \
    libncurses5-dev libjemalloc-dev wget m4 g++ libssl-dev

How to delete a set of data based on group in RethinkDB with Python

from rethinkdb import RethinkDB
from rethinkdb.errors import ReqlOpFailedError
from faker import Faker
from faker_music import MusicProvider
from random import random
from time import sleep

fake = Faker()
fake.add_provider(MusicProvider)
r = RethinkDB()

conn = r.connect("localhost", 28015)

# recreate the table from scratch on each run
try:
  r.db("test").table_drop("instruments").run(conn)
except ReqlOpFailedError:
  pass

r.db("test").table_create("instruments").run(conn)

def instrument() -> dict:
  return {"name": fake.music_instrument(), "category": fake.music_instrument_category()}

initial = [instrument() for _ in range(3)]
r.table("instruments").insert(initial).run(conn)

while True:
  check = random()
  if 0.25 < check < 0.5:
    r.table("instruments").insert(instrument()).run(conn)

  # group by category, count, and delete every category over the limit
  counts = dict(r.table("instruments").group("category").count().run(conn))
  for category, count in counts.items():
    if count > 2:
      r.table("instruments").filter({"category": category}).delete().run(conn)

  sleep(1)

How to check if table exists in RethinkDB with Python

from rethinkdb.errors import ReqlOpFailedError

# assumes r.connect(...).repl() has registered a default connection,
# so run() needs no explicit conn argument
try:
  r.db("test").table_drop("instruments").run()
except ReqlOpFailedError:
  # table did not exist yet
  pass

r.db("test").table_create("instruments").run()

NodeJS and RethinkDB - How to handle connection interruption (connection retry) when listening for table changes (realtime)?

// write every change to file

function onChange(err, cursor) {
  // log every change to console
  // Change format: https://rethinkdb.com/docs/changefeeds/javascript/
  cursor.each(console.log);

  // if file not open, open file 
  openFile()

  // write changes to file
  cursor.each(writeChangesToFile)

  // if you decide you no longer want these changes
  if (stopChangesNow) {
    cursor.close() // https://rethinkdb.com/api/javascript/#close-cursor
    cancel()
  }
}

// stop writing changes

function cancel(stream) {
  // close file opened above
  closeFile()
}

try {
  r.table('users').changes().run(conn, onChange)
} catch(e) {
  cancel()
}
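The snippet above handles change events but not the reconnect part of the question. One common approach, sketched here as an assumption rather than driver-provided behavior, is exponential backoff between reconnect attempts:

```python
def backoff_delays(cap=30):
    """Yield reconnect delays: 1s, 2s, 4s, ... capped at `cap` seconds."""
    delay = 1
    while True:
        yield delay
        delay = min(delay * 2, cap)

def first_delays(n, cap=30):
    """The first n delays, e.g. to inspect the schedule."""
    gen = backoff_delays(cap)
    return [next(gen) for _ in range(n)]

# Hypothetical reconnect loop; connect/subscribe/handle stand in for
# driver calls such as r.connect(...) and table.changes().run(conn):
#
#   import time
#   for delay in backoff_delays():
#       try:
#           conn = connect()
#           for change in subscribe(conn):
#               handle(change)
#       except ConnectionError:
#           time.sleep(delay)
```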

How to check the type of a document element (sub document vs list)? "ReqlServerCompileError: Variable name not found in: var_1"

def unwind(self, path):
    items = path.split('.')
    cursor = self._cursor
    r = self._f._r
    for item in items:
        cursor = r.branch(
            cursor[item][0].type_of() == 'ARRAY',
            cursor.concat_map(lambda row: row[item]),
            cursor[item]
        )

    return self.wrap(self._f, cursor)
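The ReQL `branch`/`type_of` logic above can be mirrored in plain Python to make the intent clearer; `unwind_plain` is a hypothetical stand-in, not part of any driver:

```python
def unwind_plain(docs, field):
    """If a document's field holds a list, emit one value per element
    (like concat_map); otherwise emit the value unchanged."""
    out = []
    for doc in docs:
        value = doc[field]
        if isinstance(value, list):
            out.extend(value)
        else:
            out.append(value)
    return out
```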

Get second last value in each row of dataframe, R

first_job <- function(x)   {x1 <- x[!is.na(x)];x1[length(x1) - 1][1]}
apply(data[-(1:4)], 1, first_job)

#[1] "PhD fellow"               "Java developer Intern"    "Optical Engineer" 
#[4] "Senior DWH&BI Engineer"   "Senior Software Engineer" "Software Developer"
-----------------------
df %>% select(1:4, starts_with('Job')) %>%
  pivot_longer(starts_with('Job'), values_drop_na = T) %>%
  group_by(Index) %>%
  slice_tail()


# A tibble: 57 x 6
# Groups:   Index [57]
   Index FromJob          Highest_education_achiev~ Skills                                     name  value        
   <dbl> <chr>            <chr>                     <chr>                                      <chr> <chr>        
 1     1 Senior Machine ~ PhD                       "Machine Learning, Mathematical Modeling,~ Job5  PhD fellow   
 2     2 Senior Machine ~ MSc Computer Science      "Java, AngularJS, frontend, backend, Azur~ Job5  Java develop~
 3     3 Senior Machine ~ MSc                       "Biometrics, Machine Learning, Pattern Re~ Job4  Optical Engi~
 4     4 Senior Machine ~ MBA                       "Databricks, Spark, Airflow, AWS Sagemake~ Job2  Senior DWH&B~
 5     5 Senior Machine ~ MSc                       "Spark, Tensorflow &TFX, Kubeflow, BigQue~ Job5  Senior Softw~
 6     6 Python Data Eng~ MSc                       "PythonC++, C, OpenCV, OpenCL, MatLab, Te~ Job3  Software Dev~
 7     7 Python Data Eng~ MSc                       "Microsoft SQL Server, Hadoop,SQL Server ~ Job2  Data Engineer
 8     8 Python Data Eng~ MSc Communication and Me~ "Keras, TensorFlow, scikit-learn, NLTK, O~ Job2  Application ~
 9     9 Lead Backend De~ High School               "ElasticSearch, OOP, NoSQL, SQL, Docker, ~ Job3  Software Dev~
10    10 Lead Backend De~ BSc Informatics           "PHP, Java Script, CSS, (X)HTML, MySQL., ~ Job3  Senior Web D~
# ... with 47 more rows
#Ex for 2nd job

df %>% select(1:4, starts_with('Job')) %>%
  pivot_longer(starts_with('Job'), values_drop_na = T) %>%
  group_by(Index) %>%
  filter(rev(row_number()) == 2)
#> # A tibble: 57 x 6
#> # Groups:   Index [57]
#>    Index FromJob     Highest_education_a~ Skills             name  value        
#>    <dbl> <chr>       <chr>                <chr>              <chr> <chr>        
#>  1     1 Senior Mac~ PhD                  "Machine Learning~ Job4  Research Sci~
#>  2     2 Senior Mac~ MSc Computer Science "Java, AngularJS,~ Job4  Analytics An~
#>  3     3 Senior Mac~ MSc                  "Biometrics, Mach~ Job3  Senior Devel~
#>  4     4 Senior Mac~ MBA                  "Databricks, Spar~ Job1~ Senior Data ~
#>  5     5 Senior Mac~ MSc                  "Spark, Tensorflo~ Job4  Graduate Tea~
#>  6     6 Python Dat~ MSc                  "PythonC++, C, Op~ Job2  Software Dev~
#>  7     7 Python Dat~ MSc                  "Microsoft SQL Se~ Job1~ Machine Lear~
#>  8     8 Python Dat~ MSc Communication a~ "Keras, TensorFlo~ Job1~ Akademischer~
#>  9     9 Lead Backe~ High School          "ElasticSearch, O~ Job2  Backend Deve~
#> 10    10 Lead Backe~ BSc Informatics      "PHP, Java Script~ Job2  Lead PHP Dev~
#> # ... with 47 more rows
df %>% select(1:4, starts_with('Job')) %>%
  pivot_longer(starts_with('Job'), values_drop_na = T) %>%
  group_by(Index) %>%
  slice(n() -1)

Nodejs - rethinkdb Unhandled rejection ReqlDriverError: First argument to `run` must be an open connection

// Callback style: pass a callback as the second argument to run()
const r = require('rethinkdb');
r.connect({host: 'localhost', port: 28015}, function(err, conn) {
    if (err) throw err;

    r.db('test').tableCreate('authors').run(conn, function(err, result) {
        if (err) throw err;
        console.log(JSON.stringify(result, null, 2));
    });
});

// Promise style: with no callback, run() returns a promise; adding a
// .catch() avoids the unhandled-rejection error from the question
const r = require('rethinkdb');
r.connect({host: 'localhost', port: 28015})
    .then(function(conn) {
        return r.db('test').tableCreate('authors').run(conn);
    })
    .then(function(result) {
        console.log(JSON.stringify(result, null, 2));
    })
    .catch(function(err) {
        console.error(err);
    });

Using secondary index for .lt() with .filter() in RethinkDB

r.db("databasename")
  .table("tablename")
  .between(r.minval, new Date(), { index: "createdAt" })
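`between` with an index defaults to a closed lower bound and an open upper bound. A plain-Python model of what the indexed range scan returns (`between_model` is a hypothetical helper, for illustration only):

```python
def between_model(rows, lower, upper, key):
    """Mimic r.between(lower, upper, {index: key}) with default bounds:
    lower inclusive, upper exclusive."""
    return [row for row in rows if lower <= row[key] < upper]
```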

FastAPI not raising HTTPException

@router.post("/brands", response_model=Brand, status_code=status.HTTP_201_CREATED)
def add_brand(brand: Brand):
    with r.connect('localhost', 28015, 'expressparts').repl() as conn:
        result = r.table("brands").insert({
            "id": brand.id,
            "name": brand.name}).run(conn)
        if result['errors'] > 0:
            error = result['first_error'].split(":")[0]
            raise HTTPException(
                status_code=400, detail=f"Error raised: {error}")
        else:
            return brand
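The error branch above relies on RethinkDB's insert result shape (an `errors` count plus a `first_error` string). That extraction can be isolated into a small helper; the function name and the sample error string in the test are illustrative assumptions:

```python
def first_error_summary(result):
    """Return the leading clause of first_error when an insert reported
    errors, else None (mirrors the handler's split(":")[0])."""
    if result.get("errors", 0) > 0:
        return result["first_error"].split(":")[0]
    return None
```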

How to remove items with a repeated attribute in RethinkDB

// keep only the first row seen for each numid
let datas = resultQuery.reduce((arr, val) => {
    if (!arr.some(v => v.numid === val.numid)) {
        arr.push(val)
    }
    return arr
}, [])
console.log(datas, 'FT array filter', __filename)
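The same first-occurrence deduplication can be written in Python for comparison; `dedupe_by` is a hypothetical helper operating on already-fetched rows:

```python
def dedupe_by(items, key):
    """Keep only the first item seen for each value of `key`."""
    seen = set()
    out = []
    for item in items:
        k = item[key]
        if k not in seen:
            seen.add(k)
            out.append(item)
    return out
```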
-----------------------
r.db('myDb').table('userSearchData')
.filter(querys=>
    querys('numid').gt('1000080')
    .and(
        querys('numid').lt(String('1000085'))
    )
)
.group("numid")
.ungroup()
.map(r.row("reduction")(0))
.limit(5) 
[
    {
        "group": "1000081",
        "reduction": [
            {
                "codeQR": "100001597182620700",
                "numid": "1000081",
                "user": "a1d0c8d0-7305-43b1-8b4d-d9a6274d76f5"
            }
        ]
    },
    {
        "group": "1000082",
        "reduction": [
            {
                "codeQR": "100001597183951080",
                "numid": "1000082",
                "user": "a1d0c8d0-7305-43b1-8b4d-d9a6274d76f5"
            },
            {
                "codeQR": "100001597182749578",
                "numid": "1000082",
                "user": "a1d0c8d0-7305-43b1-8b4d-d9a6274d76f5"
            },
            {
                "codeQR": "100001597185279006",
                "numid": "1000082",
                "user": "a1d0c8d0-7305-43b1-8b4d-d9a6274d76f5"
            }
        ]
    } 
//....
]
[
    {
        "codeQR": "100001597182620700",
        "numid": "1000081",
        "user": "a1d0c8d0-7305-43b1-8b4d-d9a6274d76f5"
    },
    {
        "codeQR": "100001597183951080",
        "numid": "1000082",
        "user": "a1d0c8d0-7305-43b1-8b4d-d9a6274d76f5"
    },
    {
        "codeQR": "100001597185279182",
        "numid": "1000083",
        "user": "a1d0c8d0-7305-43b1-8b4d-d9a6274d76f5"
    },
    {
        "codeQR": "100001597183974288",
        "numid": "1000084",
        "user": "a1d0c8d0-7305-43b1-8b4d-d9a6274d76f5"
    }
]

Retrieve list of table names

Connection conn = r.connection().hostname("localhost").port(32769).connect();
List<?> tables = r.db("test").tableList().run(conn, ArrayList.class).first();
if (tables != null) {
    tables.forEach(System.out::println);
}
movies
tv_shows

Community Discussions

Trending Discussions on rethinkdb
  • How to delete a set of data based on group in RethinkDB with Python
  • How to check if table exists in RethinkDB with Python
  • NodeJS and RethinkDB - How to handle connection interruption (connection retry) when listening for table changes (realtime)?
  • How to check the type of a document element (sub document vs list)? "ReqlServerCompileError: Variable name not found in: var_1"
  • TypeError: Cannot read property 'findOne' of undefined (using mongoose)
  • Get second last value in each row of dataframe, R
  • PHP Mktime shows 2 March's when I dump foreach month
  • Nodejs - rethinkdb Unhandled rejection ReqlDriverError: First argument to `run` must be an open connection
  • Using secondary index for .lt() with .filter() in RethinkDB
  • How to fix rethinkdb connection refused problem?

QUESTION

How to delete a set of data based on group in RethinkDB with Python

Asked 2022-Apr-08 at 16:10

I have a table of musical instruments. I want to group them by category, count each group, and if a group is larger than a threshold, delete all the instruments in that group. My code currently is:

from rethinkdb import RethinkDB
from faker import Faker
from faker_music import MusicProvider
from random import random
from time import sleep

fake = Faker()
fake.add_provider(MusicProvider)
r = RethinkDB()

r.connect( "localhost", 28015).repl()

try:
  r.db("test").table_drop("instruments").run()
except:
  pass

r.db("test").table_create("instruments").run()

def instrument()->dict:
  instrument = {"name":fake.music_instrument(),"category":fake.music_instrument_category()}
  return instrument

initial = [instrument() for _ in range(3)]
r.table("instruments").insert(initial).run()

while True:
  check = random()
  if check < 0.5 and check >0.25:
    r.table("instruments").insert(instrument()).run()

  if  check < 0.25:
    cursor = r.table("instruments").group("category").count().gt(3).filter.delete().run()

  sleep(1)

where the `r.table("instruments").group("category").count().gt(3).filter.delete().run()` call does not work, but is indicative of what I am trying to achieve.

ANSWER

Answered 2022-Apr-08 at 16:10

Ok this worked:

from rethinkdb import RethinkDB
from rethinkdb.errors import ReqlOpFailedError
from faker import Faker
from faker_music import MusicProvider
from random import random
from time import sleep

fake = Faker()
fake.add_provider(MusicProvider)
r = RethinkDB()

conn = r.connect("localhost", 28015)

# recreate the table from scratch on each run
try:
  r.db("test").table_drop("instruments").run(conn)
except ReqlOpFailedError:
  pass

r.db("test").table_create("instruments").run(conn)

def instrument() -> dict:
  return {"name": fake.music_instrument(), "category": fake.music_instrument_category()}

initial = [instrument() for _ in range(3)]
r.table("instruments").insert(initial).run(conn)

while True:
  check = random()
  if 0.25 < check < 0.5:
    r.table("instruments").insert(instrument()).run(conn)

  # group by category, count, and delete any category over the limit
  counts = dict(r.table("instruments").group("category").count().run(conn))
  for category, count in counts.items():
    if count > 2:
      r.table("instruments").filter({"category": category}).delete().run(conn)

  sleep(1)

but I am sure there is a more streamlined functional solution, if someone knows please comment.

Source https://stackoverflow.com/questions/71798782
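The heart of the accepted answer is a group-count-then-delete pass. The selection step alone can be sketched in plain Python (a hypothetical helper, independent of the driver):

```python
from collections import Counter

def categories_over_limit(instruments, limit=2):
    """Return the categories whose document count exceeds `limit`,
    matching the answer's `if count > 2` check."""
    counts = Counter(doc["category"] for doc in instruments)
    return [category for category, n in counts.items() if n > limit]
```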

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install rethinkdb

For a thirty-second RethinkDB quickstart, check out [rethinkdb.com/docs/quickstart](https://www.rethinkdb.com/docs/quickstart).
[JavaScript](https://rethinkdb.com/docs/guide/javascript/)
[Python](https://rethinkdb.com/docs/guide/python/)
[Ruby](https://rethinkdb.com/docs/guide/ruby/)
[Java](https://rethinkdb.com/docs/guide/java/)
C#/.NET: [RethinkDb.Driver](https://github.com/bchavez/RethinkDb.Driver), [rethinkdb-net](https://github.com/mfenniak/rethinkdb-net)
C++: [librethinkdbxx](https://github.com/AtnNn/librethinkdbxx)
Clojure: [clj-rethinkdb](https://github.com/apa512/clj-rethinkdb)
Elixir: [rethinkdb-elixir](https://github.com/rethinkdb/rethinkdb-elixir)
Go: [GoRethink](https://github.com/dancannon/gorethink)
Haskell: [haskell-rethinkdb](https://github.com/atnnn/haskell-rethinkdb)
PHP: [php-rethink-ql](https://github.com/tbolier/php-rethink-ql)
Rust: [reql](https://github.com/rust-rethinkdb/reql)
Scala: [rethink-scala](https://github.com/kclay/rethink-scala)

Support

RethinkDB was built by a dedicated team, but it wouldn’t have been possible without the support and contributions of hundreds of people from all over the world. We could use your help too! Check out our [contributing guidelines](CONTRIBUTING.md) to get started.

  • © 2022 Open Weaver Inc.