ckanext-dcat
kandi X-RAY | ckanext-dcat Summary
CKAN ♥ DCAT
Top functions reviewed by kandi - BETA
- Create a DCAT graph from a dataset
- Add a date triple
- Safely quote characters
- Create a new URIRef instance
- Parse a DCAT dataset
- Return an access rights statement
- Return the object value for the given subject and predicate
- Return the first object matching a subject and predicate
- Run the import stage of the harvester
- Return the dataset with the given GUID
- Search the catalog for datasets
- Read the catalog page
- Read a dataset page
- Create a graph from a catalog
- Serialize a dataset
- Add a mailto: prefix to an email address
- Show the CKAN catalog
- Run the command
- Display a dataset
- Return a list of CKAN datasets
- Parse a dataset
- Gather a list of objects from the DCAT API
- Gather the content of the given harvest job
- Add DCAT-AP v2 triples
- Generate a graph from a dataset
- Helper method to get a value from a resource
ckanext-dcat Key Features
ckanext-dcat Examples and Code Snippets
Community Discussions
Trending Discussions on ckanext-dcat
QUESTION
I've been trying to figure out how to set up a SPARQL endpoint for a couple of days, but no matter how much I read, I can't understand it.
Let me explain my intention: I have an open data server running on CKAN, and my goal is to be able to run SPARQL queries on the data. I know I can't do it directly on the datasets themselves; I would have to define my own OWL ontology and convert the data I want to use from CSV (the format it is currently in) to RDF triples, so it can be used as linked data.
My idea was to first test with the repository metadata, which can be generated automatically with the ckanext-dcat extension, but I really don't know where to start. I've searched for information on how to install a Virtuoso server for SPARQL, but what I've found leaves a lot to be desired, and I can find nothing that explains how I could actually load my own OWL and RDF files into Virtuoso itself.
Could someone lend me a hand getting started? Thank you.
...ANSWER
Answered 2017-Jun-19 at 14:38
I'm a little confused. Maybe this is two or more questions?
1. How to convert tabular data, like CSV, into the RDF semantic format?
This can be done with an R2RML approach. Karma is a great GUI for that purpose. Like you say, a conversion like that can really be improved with an underlying OWL ontology. But it can be done without creating a custom ontology, too.
I have elaborated on this in the answer to another question.
2. Now that I have some RDF formatted data, how can I expose it with a SPARQL endpoint?
Virtuoso is a reasonable choice. There are multiple ways to deploy it and multiple ways to load the data, and therefore lots of tutorials on the subject. Here's a good one, from DBpedia.
If you'd like a simpler path to starting an RDF triplestore with a SPARQL endpoint, Stardog and Blazegraph are available as JARs, and RDF4J can easily be deployed within a container like Tomcat.
All provide web-based graphical interfaces for loading data and running queries, in addition to SPARQL REST endpoints. At least Stardog also provides command-line tools for bulk loading.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install ckanext-dcat
- Install ckanext-harvest (https://github.com/ckan/ckanext-harvest#installation) (only if you want to use the RDF harvester)
- Install the extension in your virtualenv: (pyenv) $ pip install -e git+https://github.com/ckan/ckanext-dcat.git#egg=ckanext-dcat
- Install the extension requirements: (pyenv) $ pip install -r ckanext-dcat/requirements.txt
- Enable the required plugins in your ini file: ckan.plugins = dcat dcat_rdf_harvester dcat_json_harvester dcat_json_interface structured_data
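Put together, the relevant part of a CKAN ini file might look like the fragment below. The plugin list comes from the steps above; the profile option is a sketch and its exact name and value should be checked against the ckanext-dcat README for the version you install:

```ini
[app:main]
# Plugins from the install steps above
ckan.plugins = dcat dcat_rdf_harvester dcat_json_harvester dcat_json_interface structured_data
# Optional: serialization profile used for the RDF output
ckanext.dcat.rdf.profiles = euro_dcat_ap
```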