triplestore | Nifty library to manage, query and store RDF triples | Parser library
kandi X-RAY | triplestore Summary
Nifty library to manage, query and store RDF triples. Make RDF great again!
Top functions reviewed by kandi - BETA
- decodeTriple decodes a triple.
- TriplesFromStruct builds triples from a struct.
- parseTriple parses a triple.
- convert takes a list of files and converts them into tstore.
- encodeBinTriple encodes a triple.
- Traverse siblings of a node.
- ObjectLiteral converts an interface to an Object literal.
- encodeNTriple encodes a triple.
- ParseLiteral takes an object and attempts to convert it to a literal.
- parseLangtag parses a language tag.
Community Discussions
Trending Discussions on triplestore
QUESTION
I am currently trying to read the files of an RDF4J triplestore from the universAAL platform and put them into an InfluxDB to merge the data from different smart living systems. However, I have noticed that the individual index files of the Native repository are encrypted/unreadable. Is there any experience from the community on how to get human-readable content out of the RDF4J files (namespace, triples.prop, triples-cosp, triples-posc, triples-spoc, values.hash, values.dat, values.id) and merge them into another database? The documentation of RDF4J did not help me here, so I could not create a decent export.
...ANSWER
Answered 2021-Mar-06 at 13:46
The files are not encrypted; they're simply a binary format, optimized for efficient storage and retrieval, used by RDF4J's Native Store database implementation. They're not meant for direct manipulation.
The easiest way to convert them to readable RDF is to spin up a Native Store on top of them and then use the RDF4J API to query/export its data. Assuming you have a complete set of data files, it should be as simple as something like this:
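The code from the original answer is elided above; the following is a minimal sketch of that approach with the RDF4J API (the data directory path and output file name are placeholders):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;

import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.sail.SailRepository;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.Rio;
import org.eclipse.rdf4j.sail.nativerdf.NativeStore;

public class ExportNativeStore {
    public static void main(String[] args) throws Exception {
        // Placeholder: the directory holding triples.prop, values.dat, etc.
        File dataDir = new File("/path/to/native-store");
        SailRepository repo = new SailRepository(new NativeStore(dataDir));
        try (RepositoryConnection conn = repo.getConnection();
             OutputStream out = new FileOutputStream("export.ttl")) {
            // Stream every statement in the store out as Turtle
            conn.export(Rio.createWriter(RDFFormat.TURTLE, out));
        } finally {
            repo.shutDown();
        }
    }
}
```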
QUESTION
I'm trying to load some basic json-ld content as a string, but I'm not able to see the namespace prefixes that should be included.
Given the following json-ld:
...ANSWER
Answered 2020-Dec-18 at 14:24
Prefixes are not an inherent part of any RDF graph; they are just conventions and shortcuts so that you don't have to type the full IRI. A specific database software/implementation can have options for configuring namespaces/prefixes, but they are just for presentation.
In this case, JsonLdParser simply does not import any prefix from the source data into the graph. This is a perfectly valid behaviour, and I don't know if it can be changed. Load can also take an IRdfHandler, which seems to be able to do something with prefixes, but creating an implementation will most likely be more difficult than simply defining the namespace yourself:
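The original snippet is elided above, and dotNetRDF is a C# library. As a hedged illustration of the same principle in Java instead, here is a sketch using RDF4J's Rio, where the prefix is declared by hand before serializing (the JSON-LD content and the ex: prefix are made up for the example):

```java
import java.io.StringReader;

import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.Rio;

public class PrefixDemo {
    public static void main(String[] args) throws Exception {
        String jsonld = "{ \"@context\": { \"ex\": \"http://example.org/\" },"
                + " \"@id\": \"ex:thing\", \"ex:label\": \"demo\" }";
        // Parsing yields plain triples with full IRIs
        Model model = Rio.parse(new StringReader(jsonld), "", RDFFormat.JSONLD);
        // Prefixes belong to the serialization, not to the RDF graph itself;
        // declare the one you want on the model before writing it out
        model.setNamespace("ex", "http://example.org/");
        Rio.write(model, System.out, RDFFormat.TURTLE);
    }
}
```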
QUESTION
When I import the dump "PathwayCommons12.All.BIOPAX.owl.gz" (linked from this page) of this Virtuoso triplestore, I've noticed that there are "#"s inserted after the prefix of various URIs.
In particular, the following query runs on the original endpoint:
...ANSWER
Answered 2020-Dec-05 at 02:24
If we look at the first few lines of that massive RDF/XML file, we see:
QUESTION
I'm coming from the RDF world, where named graphs are persistent and can be used like a collection of triples. Moreover, you can query against one single named graph or over the whole triplestore. I'm looking for the same features (or a workaround to achieve them) in Neo4j.
Neo4j's Graph Catalog is well documented. As I understood, named graphs in Neo4j are stored entirely in-memory (so lost after a restart) with a subset of nodes you define for analytic purposes.
Is there a way to create persistent named graphs in Neo4j? A graph that is stored on disk with the data and that permits fast access to a subset of nodes (nodes can be added to or removed from the named graph).
...ANSWER
Answered 2020-Nov-13 at 19:06
You could give every node in the same "named graph" the same label. Since a node can have multiple labels, this does not prevent you from using other labels for other purposes as well.
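A rough sketch of that technique with the Neo4j Java driver (connection details, the MyNamedGraph label, and the property names are placeholders, not from the original answer):

```java
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

public class NamedGraphAsLabel {
    public static void main(String[] args) {
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "secret"));
             Session session = driver.session()) {
            // Adding the extra label marks the node as a member of the
            // "named graph"; removing the label takes it out again
            session.run("MERGE (p:Person {name: $name}) SET p:MyNamedGraph",
                    Values.parameters("name", "alice"));
            // Query only within the named graph by matching on the label
            long count = session.run("MATCH (n:MyNamedGraph) RETURN count(n)")
                    .single().get(0).asLong();
            System.out.println(count);
        }
    }
}
```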
QUESTION
I am cloning a large public triplestore for local development of a client app.
The data is too large to fit on the ssd partition where /data lives. How can I create a new repository at a different location to host this data?
...ANSWER
Answered 2020-Sep-27 at 16:00
GraphDB on startup will read the value of the graphdb.home.data parameter. By default it will point to ${graphdb.home}/data. You have two options:
Move all repositories to the big non-SSD partition
You need to start GraphDB with ./graphdb -Dgraphdb.home.data=/mnt/big-drive/ or edit the value of the graphdb.home.data parameter in ${graphdb.home}/conf/graphdb.properties.
Move a single repository to a different location
GraphDB does not allow creating a repository if the directory already exists. The easiest way to work around this is to create a new empty repository bigRepo, initialize the repository by making at least one request to it, and then shut down GraphDB. Then move the directory $gdb.home/data/repositories/bigRepo/storage/ to your new big drive and create a symbolic link on the file system: ln -s /mnt/big-drive/ data/repositories/bigRepo/storage/
You can also apply the same technique for moving only individual files.
Please make sure that all permissions are correctly set by using the same user to start GraphDB.
QUESTION
I'm trying to use SPARQL to query literals that have regexes with balanced parentheses. So "( (1) ((2)) (((3))) 4)" should be returned, but "( (1) ((2)) (((3)) 4)", where I removed a closing parenthesis after the "3", should not be returned.
I've previously looked here for a suitable regex: Regular expression to match balanced parentheses
I have been trying to implement the regex suggested by rogal111, which is as follows:
...ANSWER
Answered 2020-Jun-23 at 13:58
Just to clarify and augment my comment about the use of REPLACE, the following should work:
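The query from the original answer is elided above. As a standalone illustration of the underlying idea, the sketch below checks balance by repeatedly deleting innermost "()" pairs; a fixed number of nested REPLACE calls inside a SPARQL FILTER can emulate the same reduction up to a bounded nesting depth (this is my reading of the approach, not the original query):

```java
public class BalancedParens {
    // True if every "(" has a matching ")". The loop deletes innermost
    // "()" pairs until nothing changes; the string is balanced exactly
    // when the reduction ends empty.
    static boolean balanced(String s) {
        String parens = s.replaceAll("[^()]", ""); // keep only parentheses
        String prev;
        do {
            prev = parens;
            parens = parens.replace("()", "");
        } while (!parens.equals(prev));
        return parens.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(balanced("( (1) ((2)) (((3))) 4)")); // true
        System.out.println(balanced("( (1) ((2)) (((3)) 4)"));  // false
    }
}
```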
QUESTION
I am new to using Blazegraph and have been developing with it locally as part of a project over the past few months. I am currently trying to host an instance of my triple store online and have got to the point where I am lost, going around in circles.
My application uses a spring-boot API to manage any interactions with the triplestore. I originally used docker-compose to host both on my local machine and was able to query and update the triplestore with no problem. This is the docker-compose.yml file I used:
...ANSWER
Answered 2020-Apr-06 at 11:20
An easy method is to use an Amazon Elastic Compute Cloud (EC2) instance on AWS.
Simply install docker-compose on the Linux VM, run the docker-compose file from there and then use an Elastic IP address and Cloudflare for a secure HTTPS connection.
If you end up needing better scalability, you can offload the Blazegraph instance to its own VM and move to a container service for the APIs after.
It is also worth noting that Blazegraph is now deprecated: its developers have joined Amazon, and it became Amazon Neptune.
QUESTION
I have the following RDF data in my Fuseki triplestore.
...ANSWER
Answered 2019-Dec-17 at 01:45
What is needed here is to edit the dataset configuration file (inside the folder /run/configuration, e.g. datasetname.ttl), add the needed configuration, and restart the Fuseki server.
QUESTION
Despite the number of questions/answers on the subject, I'm still having trouble configuring Apache Jena Fuseki...
I'm trying to configure an Apache Jena Fuseki instance with TDB and the OWL reasoner activated, for testing my application. I need to create a dataset, execute my tests, and delete the dataset programmatically.
Setup
I use the stain/jena-fuseki Docker image to run Apache Jena Fuseki.
I run Jena Fuseki in version 3.10.0.
...ANSWER
Answered 2019-Oct-24 at 14:53
The full server provides delete for databases created through the UI or protocol using one of the templates. Arbitrary configuration files pushed to the server can't be deleted this way; even if they can be unlinked from the server, there might be stuff left around (they are arbitrary assembler files), which isn't good for testing.
For testing, there is a simpler way: spin up a server for each test, either scripted or from Java (JUnit etc.). The "Fuseki main" version of the server starts and stops quite quickly. So start a server with the configuration required; you can use an in-memory TDB database (location is "--mem--") for the data if the data is reasonably small.
This gives complete cleanup when the server exits, making the tests cleanly isolated.
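A minimal sketch of that setup, assuming a recent Jena with the jena-fuseki-main artifact on the classpath (the port and dataset name are arbitrary choices for the example):

```java
import org.apache.jena.fuseki.main.FusekiServer;
import org.apache.jena.query.Dataset;
import org.apache.jena.tdb.TDBFactory;

public class EmbeddedFusekiTest {
    public static void main(String[] args) {
        // In-memory TDB dataset: nothing is written to disk, so each test
        // run starts clean and everything vanishes when the server exits
        Dataset dataset = TDBFactory.createDataset();
        FusekiServer server = FusekiServer.create()
                .port(3330)
                .add("/ds", dataset)
                .build();
        server.start();
        try {
            // ... run the tests against http://localhost:3330/ds here ...
        } finally {
            server.stop();
        }
    }
}
```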
QUESTION
I am trying to use Dgraph as my primary database. I have a simple system that has two domain entities, viz. User and Product. They both have certain properties represented as edges/attributes in Dgraph. They both have a common property name, which is a string. If I use the same predicate name for both kinds of nodes, then it creates a problem when I am using a has function to find all the users with a name edge: the has function also returns Product nodes with a name edge. This is not desirable.
In this situation, what is the right approach or recommendation when modeling the domain entities? I can think of two approaches:
- Have a common edge type for all the nodes to uniquely identify similar nodes. Here the value of type would be User or Product. This is approximately similar to a traditional table/column analogy, where type represents the table and edges act as columns with a context localized to the type property.
- Have a separate predicate for each node type. So, instead of having name, prefer two predicates like user_name and product_name.
I believe this problem only exists for RDF/Triplestore databases like Dgraph and not for property graphs like Neo4j since each node contains its own properties.
...ANSWER
Answered 2019-Sep-23 at 09:05
Good news! In Dgraph v1.1, types were introduced. You may assign the types User and Product to your entities and filter at query time by doing:
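The query from the original answer is elided above. Below is a hedged sketch of such a type-filtered query using the dgraph4j client; the endpoint is a placeholder, and it assumes the nodes already carry the User type (set via dgraph.type and a matching schema definition):

```java
import io.dgraph.DgraphClient;
import io.dgraph.DgraphGrpc;
import io.dgraph.DgraphProto.Response;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class QueryByType {
    public static void main(String[] args) {
        // Placeholder endpoint for a local Dgraph alpha
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 9080).usePlaintext().build();
        DgraphClient client = new DgraphClient(DgraphGrpc.newStub(channel));

        // type(User) restricts the match to User nodes, so Product nodes
        // sharing the "name" predicate are no longer returned
        String query = "{ users(func: type(User)) { name } }";
        Response res = client.newTransaction().query(query);
        System.out.println(res.getJson().toStringUtf8());

        channel.shutdown();
    }
}
```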
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported