dendrite | querying large datasets on a single host

 by jwhitbeck | Java | Updated: 10 months ago | v0.5.13 | License: Proprietary

kandi X-RAY | dendrite REVIEW AND RATINGS

Dendrite is a library for querying large datasets on a single host at near-interactive speeds.

Support

  • dendrite has a low active ecosystem.
  • It has 67 stars and 0 forks.
  • It had no major release in the last 12 months.
  • On average issues are closed in 6 days.
  • It has a neutral sentiment in the developer community.

Quality

  • dendrite has no issues reported.

Security

  • dendrite has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

  • dendrite has a Proprietary License.
  • A Proprietary label can mean either an open-source license that is not SPDX-compliant or a genuinely closed license, so review the terms closely before use.

Reuse

  • dendrite releases are available to install and integrate.
  • dendrite has no build file. You will need to create the build yourself to build the component from source.

dendrite Key Features

  • simple: there is no configuration, no services to run, and reads are as simple as opening a file;
  • fast: there are few bottlenecks and reads will usually make good use of all available CPU cores;
  • compact: the file size is typically 30-40% lower than the equivalent compressed JSON;
  • flexible: it supports the same rich set of nested data structures as EDN;
  • write once, read often: optimizations are run at write time to ensure fast read-time performance.

dendrite examples and code snippets

  • Restoring mask to image
  • How to update all the values in a BTreeSet?
  • Count protuberances in dendrite with openCV (python)
  • How can I retrieve <oboInOwl:hasExactSynonym> from every <rdf:Description> using OWL API

Restoring mask to image

output = net(input)

binary_mask = torch.argmax(output, dim=0).cpu().numpy()
ax.set_title('Neuron Class')
ax.imshow(binary_mask == 0)

import numpy as np
import torch
from PIL import Image

# Stand-in for the real mask: 0 = black, 1 = red, 2 = white
tensor = torch.randint(high=3, size=(512, 512))

red = tensor == 1
white = tensor == 2

# The red (zeroth) channel must be on for both red and white pixels
zero_channel = red | white

image = torch.stack([zero_channel, white, white]).int().numpy() * 255
# PIL expects (H, W, 3), so move the channel axis last
Image.fromarray(image.transpose(1, 2, 0).astype(np.uint8))

How to update all the values in a BTreeSet?

let set = std::mem::replace(&mut data_in_some_strct, BTreeSet::new());

data_in_some_strct = set.into_iter()
    .map(|mut p| p.offset(&Pos::of(1,0)))
    .inspect(|p| println!("{:?}", *p))
    .collect();

Count protuberances in dendrite with openCV (python)

angle1 = arctan((by - ay) / (bx - ax))
angle2 = arctan((cy - by) / (cx - bx))
angleDiff = angle2 - angle1
if (angleDiff < -PI) angleDiff = angleDiff + 2*PI

if (angleDiff > 0) concave
else convex

How can I retrieve <oboInOwl:hasExactSynonym> from every <rdf:Description> using OWL API

    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    OWLOntology pizzaOntology = manager.loadOntologyFromOntologyDocument(f);
    Set<OWLOntology> allOntologies = manager.getImportsClosure(pizzaOntology);
    // System.out.println(allOntologies);
    OWLReasonerFactory reasonerFactory = new ElkReasonerFactory();
    OWLReasoner reasoner = reasonerFactory.createReasoner(pizzaOntology);
    // pizzaOntology
    OWLDataFactory factory = manager.getOWLDataFactory();
    Map<OWLAnnotationSubject, String> allLabels = new HashMap<>();
    Map<OWLAnnotationSubject, String> allExactSynonyms = new HashMap<>();
    for (OWLAnnotationAssertionAxiom ax : pizzaOntology
        .getAxioms(AxiomType.ANNOTATION_ASSERTION)) {
        // Check if the axiom is a label and write to file
        OWLAnnotationSubject subject = ax.getSubject();
        if (ax.getProperty().toString().contains("hasExactSynonym")) {
            allExactSynonyms.put(subject, ax.getValue().toString());
        }
        if (ax.getProperty().equals(factory.getRDFSLabel())) {
            String label = ax.getValue().toString();
            label = label.toLowerCase();
            label = label.replaceAll("[^\\p{L}\\p{Nd}]+", " ");
            allLabels.put(subject, label);
        }
    }

COMMUNITY DISCUSSIONS

Top Trending Discussions on dendrite
  • Restoring mask to image
  • How to update all the values in a BTreeSet?
  • Count protuberances in dendrite with openCV (python)
  • How can I retrieve <oboInOwl:hasExactSynonym> from every <rdf:Description> using OWL API

QUESTION

Restoring mask to image

Asked 2020-Oct-08 at 16:32

My PyTorch model outputs a segmented image with values (0,1,2) for each one of the three classes. During the preparation of the set, I mapped black to 0, red to 1 and white to 2. I have two questions:

  1. How can I show what each class represents? For example, take a look at the image (omitted here). I am currently using the following method to show each class:

         output = net(input)
    
         input = input.cpu().squeeze()
         input = transforms.ToPILImage()(input)
    
         probs = F.softmax(output, dim=1)
         probs = probs.squeeze(0)
    
         full_mask = probs.squeeze().cpu().numpy()
    
         fig, (ax0, ax1, ax2, ax3, ax4) = plt.subplots(1, 5, figsize=(20,10), sharey=True)
    
         ax0.set_title('Input Image')
         ax1.set_title('Background Class')
         ax2.set_title('Neuron Class')
         ax3.set_title('Dendrite Class')
         ax4.set_title('Predicted Mask')
    
         ax0.imshow(input)
         ax1.imshow(full_mask[0, :, :].squeeze())
         ax2.imshow(full_mask[1, :, :].squeeze())
         ax3.imshow(full_mask[2, :, :].squeeze())
    
         full_mask = np.argmax(full_mask, 0)
         img = mask_to_image(full_mask)
    

But there appear to be shared pixels between the classes. Is there a better way to show this (I want the first image to show only the background class, the second only the neuron class, and the third only the dendrite class)?

2. My second question is about generating a black, red and white image from the mask. Currently the mask is of shape (512, 512) and has the following values:

[[0 0 0 ... 0 0 0]
 [0 0 0 ... 2 0 0]
 [0 0 0 ... 2 2 0]
 ...
 [2 1 2 ... 2 2 2]
 [2 1 2 ... 2 2 2]
 [0 2 0 ... 2 2 2]]

And the results look like this (image omitted).

Since I am using this code to convert to image:

def mask_to_image(mask):
   return Image.fromarray((mask).astype(np.uint8))

ANSWER

Answered 2020-Oct-08 at 16:32

But there appear to be shared pixels between the classes. Is there a better way to show this (I want the first image to show only the background class, the second only the neuron class, and the third only the dendrite class)?

Yes, you can take the argmax along the 0th dimension so that each pixel gets the class with the highest logit (unnormalized probability); comparing the result against a class index then gives a clean binary mask for that class:

output = net(input)

binary_mask = torch.argmax(output, dim=0).cpu().numpy()
ax.set_title('Neuron Class')
ax.imshow(binary_mask == 0)

My second question is about generating a black, red and white image from the mask. Currently the mask is of shape (512, 512) and has the following values

You can spread the [0, 1, 2] values across the zeroth axis, making the image channel-wise. Then [0, 0, 0] across all channels for a single pixel is black, [255, 255, 255] is white, and [255, 0, 0] is red (PIL uses RGB format):

import numpy as np
import torch
from PIL import Image

# Stand-in for the real mask: 0 = black, 1 = red, 2 = white
tensor = torch.randint(high=3, size=(512, 512))

red = tensor == 1
white = tensor == 2

# The red (zeroth) channel must be on for both red and white pixels
zero_channel = red | white

image = torch.stack([zero_channel, white, white]).int().numpy() * 255
# PIL expects (H, W, 3), so move the channel axis last
Image.fromarray(image.transpose(1, 2, 0).astype(np.uint8))
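An equivalent and arguably simpler route to the same black/red/white image is to index a small colour palette with the class mask. This is a minimal sketch, assuming the mask is a (512, 512) array with the question's mapping (0 = black, 1 = red, 2 = white); the random mask here is only a stand-in for the real prediction:

import numpy as np
from PIL import Image

mask = np.random.randint(0, 3, size=(512, 512))        # stand-in for the real (512, 512) class mask
palette = np.array([[0, 0, 0],                          # 0 -> black
                    [255, 0, 0],                        # 1 -> red
                    [255, 255, 255]], dtype=np.uint8)   # 2 -> white
rgb = palette[mask]                                     # fancy indexing gives shape (512, 512, 3)
img = Image.fromarray(rgb)

Because the indexed array already has shape (H, W, 3), no transpose is needed before handing it to PIL.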

Source https://stackoverflow.com/questions/64264678

QUESTION

How to update all the values in a BTreeSet?

Asked 2019-Feb-14 at 20:15

I have a collection which is a field of a struct in some module. I want to update all the values in the collection from another module.

I wrote some code to mimic what I want to achieve. It's shortened a bit, but I think it has all the needed parts. There is no struct holding the collection in this code, but imagine this is a getter which returns the collection. I added comments showing how I think it should look.

pub mod pos {
    use std::cmp::{Ordering, PartialEq};

    #[derive(PartialOrd, PartialEq, Eq, Hash, Debug, Copy, Clone)]
    pub struct Pos {
        pub x: i32,
        pub y: i32,
    }

    #[allow(dead_code)]
    impl Pos {
        pub fn of(x: i32, y: i32) -> Self {
            Self { x, y }
        }

        pub fn offset(&mut self, pos: &Self) -> Self {
            self.x += pos.x;
            self.y += pos.y;

            *self
        }
    }

    impl Ord for Pos {
        fn cmp(&self, other: &Self) -> Ordering {
            if self.x < other.x {
                Ordering::Less
            } else if self.eq(other) {
                Ordering::Equal
            } else {
                Ordering::Greater
            }
        }
    }
}

mod test {
    use crate::pos::Pos;
    use std::collections::BTreeSet;

    #[test]
    fn test_iterators() {
        let mut data_in_some_strct: BTreeSet<Pos> = BTreeSet::new();

        data_in_some_strct.insert(Pos::of(1, 1));
        data_in_some_strct.insert(Pos::of(2, 2));
        data_in_some_strct.insert(Pos::of(3, 3));
        data_in_some_strct.insert(Pos::of(4, 4));

        // mimic getter call ( get_data(&mut self) -> &BTreeSet<Pos> {...}
        //    let set = data_in_some_strct;   // works, but not a reference
        let set = &data_in_some_strct; // doesn't work, How to adjust code to make it work??

        data_in_some_strct = set
            .into_iter()
            .map(|mut p| p.offset(&Pos::of(1, 0)))
            .inspect(|p| println!("{:?}", *p))
            .collect();

        assert_eq!(data_in_some_strct.contains(&Pos::of(2, 1)), true);
        assert_eq!(data_in_some_strct.contains(&Pos::of(3, 2)), true);
        assert_eq!(data_in_some_strct.contains(&Pos::of(4, 3)), true);
        assert_eq!(data_in_some_strct.contains(&Pos::of(5, 4)), true);
    }
}

Playground

error[E0596]: cannot borrow `*p` as mutable, as it is behind a `&` reference
  --> src/lib.rs:56:26
   |
56 |             .map(|mut p| p.offset(&Pos::of(1, 0)))
   |                       -  ^ `p` is a `&` reference, so the data it refers to cannot be borrowed as mutable
   |                       |
   |                       help: consider changing this to be a mutable reference: `&mut pos::Pos`

I managed to make it work without borrowing, but I would like to make it work with borrowing. I guess there is more than one way to achieve it. Comments to help my Rust brain dendrites connect are welcome.

ANSWER

Answered 2019-Feb-14 at 19:53

BTreeSet doesn't implement impl<'a, T> IntoIterator for &'a mut BTreeSet<T>, because mutating elements in place could break the tree's ordering invariant.

You can only do this with types that implement IntoIterator over mutable references, such as impl<'a, T> IntoIterator for &'a mut Vec<T>.

Source https://stackoverflow.com/questions/54697274

QUESTION

Count protuberances in dendrite with openCV (python)

Asked 2018-Jan-31 at 06:38

I'm trying to count dendritic spines (the tiny protuberances) in mouse dendrites obtained by fluorescent microscopy, using Python and OpenCV.

Here is the original image, from which I'm starting:

Raw picture (image omitted).

After some preprocessing (code below) I've obtained these contours:

Raw picture with contours in white (image omitted).

What I need to do is to recognize all protuberances, obtaining something like this:

Raw picture with contours in white and expected counts in red (image omitted).

What I intended to do, after preprocessing the image (binarizing, thresholding and reducing its noise), was to draw the contours and try to find convexity defects in them. The problem arose because some of the "spines" (the technical name for those protuberances) are not recognized: they end up bulged together in the same convexity defect, underestimating the result. Is there any way to be more "precise" when marking convexity defects?

Raw image with the contour marked in white (image omitted). Red dots mark spines that were identified with my code; green dots mark spines I still can't recognize.

My Python code:

import cv2
import numpy as np
from matplotlib import pyplot as plt

#Image loading and preprocessing:

img = cv2.imread('Prueba.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.pyrMeanShiftFiltering(img,5,11)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

ret,thresh1 = cv2.threshold(gray,5,255,0)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
img1 = cv2.morphologyEx(thresh1, cv2.MORPH_OPEN, kernel)
img1 = cv2.morphologyEx(img1, cv2.MORPH_OPEN, kernel)
img1 = cv2.dilate(img1,kernel,iterations = 5)

#Drawing of contours. Some spines were detached from the main shaft due to
#poor image quality. The main idea of the code below is to identify the shaft
#as the biggest contour, and count any smaller contour as a spine too.

_, contours,_ = cv2.findContours(img1,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
print("Number of contours detected: "+str(len(contours)))

cv2.drawContours(img,contours,-1,(255,255,255),6)
plt.imshow(img)
plt.show()

lengths = [len(i) for i in contours]
cnt = lengths.index(max(lengths))

#The contour of the main shaft is stored in cnt    

cnt = contours.pop(cnt)

#Finding convexity points with hull:

hull = cv2.convexHull(cnt) 

#The next lines are just for visualization. All centroids of smaller contours
#are marked as spines.

for i in contours:

    M = cv2.moments(i)
    centroid_x = int(M['m10']/M['m00'])
    centroid_y = int(M['m01']/M['m00'])
    centroid = np.array([[[centroid_x, centroid_y]]])

    print(centroid)

    cv2.drawContours(img,centroid,-1,(0,255,0),25)
    cv2.drawContours(img,centroid,-1,(255,0,0),10)

cv2.drawContours(img,hull,-1,(0,255,0),25)
cv2.drawContours(img,hull,-1,(255,0,0),10)

plt.imshow(img)
plt.show()

#Finally, the number of spines is computed as the sum between smaller contours
#and protuberances in the main shaft.

spines = len(contours)+len(hull)
print("Number of identified spines: " + str(spines))

I know my code has many smaller problems to solve yet, but I think the biggest one is the one presented here.

Thanks for your help! and have a good one

ANSWER

Answered 2018-Jan-31 at 06:38

I would approximate the contour to a polygon as Silencer suggests (don't use the convex hull). Maybe you should simplify the contour just a little bit to keep most of the detail of the shape.

This way, you will have many vertices that you have to filter: looking at the angle of each vertex you can tell whether it is concave or convex. Each spine is one or more convex vertices between concave vertices (if you have several consecutive convex vertices, keep only the sharpest one).

EDIT: in order to compute the angle you can do the following: let's say that a, b and c are three consecutive vertices

angle1 = arctan((by - ay) / (bx - ax))
angle2 = arctan((cy - by) / (cx - bx))
angleDiff = angle2 - angle1
if (angleDiff < -PI) angleDiff = angleDiff + 2*PI

if (angleDiff > 0) concave
else convex

Or vice versa, depending on whether your contour is clockwise or counterclockwise, black or white. If you sum all the angleDiff values of any polygon, the result should be 2*PI; if it is -2*PI, then the last "if" should be swapped.
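Below is a runnable sketch of that vertex test, offered as one possible reading of the answer rather than its exact method: it simplifies the contour with cv2.approxPolyDP as suggested above, uses np.arctan2 instead of plain arctan so the quadrant is handled automatically, and counts each run of convex vertices as one spine. The function names, the epsilon fraction and the run-counting rule are illustrative choices, not part of the original answer.

import cv2
import numpy as np

def classify_vertices(cnt, epsilon_frac=0.002):
    # Simplify the contour to a polygon while keeping most of its detail.
    epsilon = epsilon_frac * cv2.arcLength(cnt, True)
    poly = cv2.approxPolyDP(cnt, epsilon, True).reshape(-1, 2)

    flags = []  # one boolean per vertex: True = convex turn, False = concave turn
    n = len(poly)
    for i in range(n):
        a, b, c = poly[i - 1], poly[i], poly[(i + 1) % n]
        angle1 = np.arctan2(b[1] - a[1], b[0] - a[0])
        angle2 = np.arctan2(c[1] - b[1], c[0] - b[0])
        diff = angle2 - angle1
        # Wrap the turn angle into (-pi, pi].
        if diff <= -np.pi:
            diff += 2 * np.pi
        elif diff > np.pi:
            diff -= 2 * np.pi
        # The sign convention depends on contour orientation; swap the comparison
        # if the summed turn angles come out to -2*pi instead of +2*pi.
        flags.append(diff > 0)
    return poly, flags

def count_spines(flags):
    # Count each maximal run of consecutive convex vertices as one spine
    # (wrap-around between the last and first vertex is ignored for simplicity).
    spines = 0
    prev = False
    for is_convex in flags:
        if is_convex and not prev:
            spines += 1
        prev = is_convex
    return spines

With the question's variables this could be applied to the main shaft contour, e.g. poly, flags = classify_vertices(cnt) followed by count_spines(flags), with the result added to the count of detached smaller contours.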

Source https://stackoverflow.com/questions/48411742

QUESTION

How can I retrieve <oboInOwl:hasExactSynonym> from every <rdf:Description> using OWL API

Asked 2017-Apr-23 at 10:18

I am new to the OWL API, hence I am facing some issues retrieving data.

Suppose I have the following data:

<rdf:Description rdf:about="http://purl.obolibrary.org/obo/GO_0044297">
        <oboInOwl:creation_date>"2010-02-05T10:37:16Z"</oboInOwl:creation_date>
        <obo:IAO_0000115>"The portion of a cell bearing surface projections such as axons, dendrites, cilia, or flagella that includes the nucleus, but excludes all cell projections."</obo:IAO_0000115>
        <oboInOwl:hasOBONamespace>"cellular_component"</oboInOwl:hasOBONamespace>
        <oboInOwl:hasDbXref>"Wikipedia:Cell_body"</oboInOwl:hasDbXref>
        <oboInOwl:hasDbXref>"FMA:67301"</oboInOwl:hasDbXref>
        <oboInOwl:hasExactSynonym>"cell soma"</oboInOwl:hasExactSynonym>
        <rdfs:label>"cell body"</rdfs:label>
        <rdfs:subClassOf>http://purl.obolibrary.org/obo/GO_0044464</rdfs:subClassOf>
        <oboInOwl:hasDbXref>"FBbt:00005107"</oboInOwl:hasDbXref>
        <rdf:type>http://www.w3.org/2002/07/owl#Class</rdf:type>
        <oboInOwl:id>"GO:0044297"</oboInOwl:id>
        <rdfs:comment>"Note that 'cell body' and 'cell soma' are not used in the literature for cells that lack projections, nor for some cells (e.g. yeast with mating projections) that do have projections."</rdfs:comment>
        <oboInOwl:created_by>"xyz"</oboInOwl:created_by>
        <oboInOwl:inSubset>http://purl.obolibrary.org/obo/go#goslim_pir</oboInOwl:inSubset>
    </rdf:Description>
<rdf:Description rdf:about="http://purl.obolibrary.org/obo/GO_0071509">
    <oboInOwl:hasRelatedSynonym>"activation of MAPKK activity involved in mating response"</oboInOwl:hasRelatedSynonym>
    <rdfs:subClassOf>http://purl.obolibrary.org/obo/GO_0090028</rdfs:subClassOf>
    <oboInOwl:hasOBONamespace>"biological_process"</oboInOwl:hasOBONamespace>
    <oboInOwl:hasExactSynonym>"activation of MAP kinase kinase activity during conjugation with cellular fusion"</oboInOwl:hasExactSynonym>
    <oboInOwl:hasExactSynonym>"conjugation with cellular fusion, activation of MAPKK activity"</oboInOwl:hasExactSynonym>
    <rdfs:label>"activation of MAPKK activity involved in conjugation with cellular fusion"</rdfs:label>
    <rdf:type>http://www.w3.org/2002/07/owl#Class</rdf:type>
    <oboInOwl:id>"GO:0071509"</oboInOwl:id>
    <oboInOwl:creation_date>"2010-01-05T02:09:58Z"</oboInOwl:creation_date>
    <oboInOwl:hasExactSynonym>"conjugation with cellular fusion, activation of MAP kinase kinase activity"</oboInOwl:hasExactSynonym>
    <oboInOwl:created_by>"midori"</oboInOwl:created_by>
    <obo:IAO_0000115>"Any process that initiates the activity of the inactive enzyme MAP kinase kinase in the context of conjugation with cellular fusion."</obo:IAO_0000115>
    <rdfs:subClassOf>http://purl.obolibrary.org/obo/GO_0000186</rdfs:subClassOf>
</rdf:Description>

For each rdf:Description I want to retrieve its corresponding rdfs:label, oboInOwl:hasExactSynonym and rdfs:subClassOf using the OWL API in Java.

So far I can get all labels but not the linkages as to which label is for which description.

Currently my code looks like:

OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology pizzaOntology = manager.loadOntologyFromOntologyDocument(f);
        Set<OWLOntology> allOntologies = manager.getImportsClosure(pizzaOntology);
        //System.out.println(allOntologies);
        OWLReasonerFactory reasonerFactory = new ElkReasonerFactory();
        OWLReasoner reasoner = reasonerFactory.createReasoner(pizzaOntology);
        //pizzaOntology
        OWLDataFactory factory = manager.getOWLDataFactory();

        Set<OWLAxiom> axiom = pizzaOntology.getAxioms();
        for (OWLAxiom o : axiom) {
            AxiomType<?> at = o.getAxiomType();
            //System.out.println("Annotation type is "+at+" for "+o);

            if (at == AxiomType.ANNOTATION_ASSERTION) {
                OWLAnnotationAssertionAxiom ax = (OWLAnnotationAssertionAxiom) o;
                //Check if the axiom is a label and write to file
                if(ax.getProperty().toString().contains("hasExactSynonym"))
                System.out.println("Data is "+ax.getValue().toString());
                if (ax.getProperty().equals(factory.getRDFSLabel())) {
                    String label = ax.getValue().toString();
                    label = label.toLowerCase();
                    label = label.replaceAll("[^\\p{L}\\p{Nd}]+", " ");
                    allLabels.add(label);
                }
            }

        }

Can someone help me with some ideas about this?

ANSWER

Answered 2017-Apr-23 at 10:18

This should help:

    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    OWLOntology pizzaOntology = manager.loadOntologyFromOntologyDocument(f);
    Set<OWLOntology> allOntologies = manager.getImportsClosure(pizzaOntology);
    // System.out.println(allOntologies);
    OWLReasonerFactory reasonerFactory = new ElkReasonerFactory();
    OWLReasoner reasoner = reasonerFactory.createReasoner(pizzaOntology);
    // pizzaOntology
    OWLDataFactory factory = manager.getOWLDataFactory();
    Map<OWLAnnotationSubject, String> allLabels = new HashMap<>();
    Map<OWLAnnotationSubject, String> allExactSynonyms = new HashMap<>();
    for (OWLAnnotationAssertionAxiom ax : pizzaOntology
        .getAxioms(AxiomType.ANNOTATION_ASSERTION)) {
        // Check if the axiom is a label and write to file
        OWLAnnotationSubject subject = ax.getSubject();
        if (ax.getProperty().toString().contains("hasExactSynonym")) {
            allExactSynonyms.put(subject, ax.getValue().toString());
        }
        if (ax.getProperty().equals(factory.getRDFSLabel())) {
            String label = ax.getValue().toString();
            label = label.toLowerCase();
            label = label.replaceAll("[^\\p{L}\\p{Nd}]+", " ");
            allLabels.put(subject, label);
        }
    }

The two maps hold the relation between an IRI (the annotation subject in this case is always an IRI - annotations are attached to the IRIs of classes, not to the classes themselves) and the value of a property. If you match a label value with the value of a property for an IRI, you can find the other value through the IRI in the corresponding map.

Source https://stackoverflow.com/questions/43306382

Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

VULNERABILITIES

No vulnerabilities reported

INSTALL dendrite

You can use dendrite like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the dendrite component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org; for Gradle installation, please refer to gradle.org.

SUPPORT

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask questions on the Stack Overflow community page.
