irit-stac | IRIT experiments on the STAC corpus
kandi X-RAY | irit-stac Summary
irit-stac is a Python library. It has no reported bugs or vulnerabilities, a build file is available, and it has low support. You can download it from GitHub.
This is the STAC codebase.
Support
irit-stac has a low active ecosystem.
It has 8 stars, 12 forks, and 4 watchers.
It had no major release in the last 6 months.
There are 2 open issues and 2 closed ones; on average, issues are closed in 8 days. There are 3 open pull requests and none closed.
It has a neutral sentiment in the developer community.
The latest version of irit-stac is current.
Quality
irit-stac has no bugs reported.
Security
irit-stac has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
License
irit-stac does not have a standard license declared.
Check the repository for any license declaration and review the terms closely.
Without a license, all rights are reserved, and you cannot use the library in your applications.
Reuse
irit-stac releases are not available. You will need to build from source code and install.
Build file is available. You can build the component from source.
Installation instructions, examples and code snippets are available.
Top functions reviewed by kandi - BETA
kandi has reviewed irit-stac and discovered the functions below as its top functions. This is intended to give you an instant insight into the functionality irit-stac implements, and to help you decide if it suits your requirements.
- Add discussion annotations
- Append a relation element
- Create a relation from tst
- Create a relationship document
- Adds units annotations
- Append a unit
- Append a span
- Annotate EDUs
- Extract features from a corpus
- Augment a game
- Backport portion of the backport
- Remove unseen EDUs from predictions
- Format decoder output
- Run a pipeline
- Add predictions to a corpus
- Generate documentation for a turn
- Decode multiple LAGs
- Process a BEL document
- Transfer the texts from one to another
- Convert a soclog to a sequence of turns
- Split annotated files
- Extract features from a given corpus
- Fix dialog boundaries
- Create an unsegmented file
- Create an argument parser
- Adjusts a split file
- Create an annotation model
irit-stac Key Features
No Key Features are available at this moment for irit-stac.
irit-stac Examples and Code Snippets
No Code Snippets are available at this moment for irit-stac.
Community Discussions
No Community Discussions are available at this moment for irit-stac. Refer to the Stack Overflow page for discussions.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install irit-stac
Both educe and attelo supply requirements.txt files which can be processed by pip.
Linux users (Debian/Ubuntu; NB: this step may be obsoleted by requiring conda):
sudo apt-get install python-dev libyaml-dev
Fetch the irit-stac code if you have not done so already:
git clone https://github.com/irit-melodi/irit-stac.git
cd irit-stac
Create a sandbox environment. We assume you will be using Anaconda or Miniconda. Once you have installed it, you should be able to create the environment with:
conda env create
If that doesn't work, make sure your Anaconda version is up to date and that the conda bin directory is in your path (it might be installed in /anaconda instead of $HOME/anaconda).
Switch into your STAC sandbox:
source activate irit-stac
Note that whenever you want to use STAC tools, you will need to run this command.
Install the irit-stac code in development mode; this should automatically fetch the educe/attelo dependencies:
pip install -r requirements.txt
At this point, if somebody tells you to update the STAC code, it should be possible to just git pull, and perhaps pip install -r requirements.txt again if attelo/educe need to be updated. No further installation will be needed.
Install the NLTK data files:
python setup-nltk.py
Link the STAC corpus in (STAC has not yet been released, so here the directory "Stac" refers to the STAC SVN directory):
ln -s /path/to/Stac/data data
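Taken together, the basic install steps above can be run as a single shell session. This is only a sketch: it assumes git and conda are already installed, and /path/to/Stac is a placeholder for your local STAC SVN checkout.

```shell
# Debian/Ubuntu only; this step may be unnecessary when using conda
sudo apt-get install python-dev libyaml-dev

# Fetch the irit-stac code
git clone https://github.com/irit-melodi/irit-stac.git
cd irit-stac

# Create and enter the sandbox environment
conda env create
source activate irit-stac

# Install irit-stac in development mode (fetches educe/attelo)
pip install -r requirements.txt

# Install the NLTK data files
python setup-nltk.py

# Link in the STAC corpus (placeholder path)
ln -s /path/to/Stac/data data
```

Remember that `source activate irit-stac` must be repeated in every new shell session before using STAC tools.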
You only need to do this if you intend to use the irit-stac parse or irit-stac serve command, i.e. if you're participating in discourse parser experiments or integration work between the parsing pipeline and the dialogue manager.
Do the basic install above.
Download the tweet-nlp part-of-speech tagger and put the jar file (ark-tweet-…) in the lib/ directory (i.e. on the STAC SVN root, at the same level as code/ and data/).
Download and install corenlp-server (needs Apache Maven!):
cd irit-stac
mkdir lib
cd lib
git clone https://github.com/kowey/corenlp-server
cd corenlp-server
mvn package
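The corenlp-server step above can be sketched as one session; it assumes the basic install is already done and that Maven (mvn) is on your PATH.

```shell
# Optional components, only needed for `irit-stac parse` / `irit-stac serve`
cd irit-stac
mkdir -p lib   # -p avoids an error if lib/ already exists
cd lib

# Fetch and build corenlp-server (requires Apache Maven)
git clone https://github.com/kowey/corenlp-server
cd corenlp-server
mvn package
```

The ark-tweet jar from the previous step also goes in this same lib/ directory.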
Support
For any new features, suggestions, and bugs, create an issue on GitHub.
If you have any questions, check and ask them on the Stack Overflow community page.