lmao | Language Model Accessor Object - Neural autocomplete
kandi X-RAY | lmao Summary
lmao is a Chrome extension paired with a Python (Django) language model server. lmao has no reported bugs or vulnerabilities and has low support. You can download it from GitHub.
What? A Chrome extension that adds a neural auto-complete function to LaTeX documents in any Overleaf project. Concretely, it is a GPT2 language model server that hooks into Overleaf's editor to add 'Gmail Smart Compose' or 'Write With Transformer'-like functionality, but for LaTeX, and it works seamlessly on top of Overleaf's editor :).

STATUS UPDATE Feb 2021: on hiatus. Please get in contact if you would like to help test the extension or take the project further. Due to time and other constraints, I haven't been able to continue development of this project. Following the rest of this readme might not work exactly as written, since some libraries have been updated and API calls have changed, so you may need to fiddle around to get things working. Nevertheless, it is still quite fun once it is working, and if anyone wants to continue work on it, feel free to fork the repo or just see how I've done things.
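To make the 'smart compose' idea concrete, here is a minimal sketch of the kind of completion step involved: given the LaTeX text before the cursor, a GPT2 model generates a short continuation. This is an illustrative example using the Hugging Face transformers library with the stock gpt2 checkpoint and generic sampling settings, not the project's actual (fine-tuned) model or code.

```python
# Minimal sketch of GPT2-based "smart compose" for LaTeX (illustrative only;
# the real project serves a gpt2-small checkpoint through a Django server).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # assumption: stock GPT2
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def complete(prefix: str, max_new_tokens: int = 20) -> str:
    """Return a short continuation of the LaTeX text before the cursor."""
    input_ids = tokenizer.encode(prefix, return_tensors="pt")
    output_ids = model.generate(
        input_ids,
        max_length=input_ids.shape[1] + max_new_tokens,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Keep only the newly generated tokens, i.e. the suggested completion.
    return tokenizer.decode(output_ids[0][input_ids.shape[1]:])

print(complete(r"In this section we prove that the estimator is "))
```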
Support
lmao has a low-activity ecosystem.
It has 4 stars, 1 fork, and 1 watcher.
It had no major release in the last 12 months.
lmao has no issues reported. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of lmao is 0.2.0.
Quality
lmao has no bugs reported.
Security
lmao has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
License
lmao does not have a standard license declared.
Check the repository for any license declaration and review the terms closely.
Without a license, all rights are reserved, and you cannot use the library in your applications.
Reuse
lmao releases are available to install and integrate.
Installation instructions are available. Examples and code snippets are not available.
lmao Key Features
No Key Features are available at this moment for lmao.
lmao Examples and Code Snippets
No Code Snippets are available at this moment for lmao.
Community Discussions
No Community Discussions are available at this moment for lmao. Refer to the Stack Overflow page for discussions.
Vulnerabilities
No vulnerabilities reported
Install lmao
Simply go to the Google Chrome Web Store, navigate to this extension, and hit 'install'. Then, the next time you open an Overleaf project, click the icon and it should be pretty obvious what to do :).
If you have a reasonable NVIDIA GPU and have Python installed, you can host the server locally with Django. So, in addition to getting the Chrome extension, you need to:
Clone this repo
Create a new python environment (3.6 or newer)
Open a new terminal with the new Python environment activated and cd into the lmao/ folder.
Install the pip requirements (pip install -r requirements.txt). You might need to run a separate command to install PyTorch correctly on your system. See PyTorch's installation guide for more info.
Create a folder in this directory called models/. Download the GPT2 model/config zip file gpt2-small.zip under the 'Releases' tab and extract it into the models/ folder. There should now be a lmao/models/gpt2-small folder.
Now run python manage.py runserver. It should start up the server and idle in the background. Leave this terminal running while you are using the extension on Overleaf. If you want to run the local server again, just open a terminal, activate your Python environment, and run python manage.py runserver again. PS: you might need to run python manage.py migrate the first time you try to run the server, depending on the Django version. A rough sketch of what the server does with each request is shown below.
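For orientation, here is a rough, hypothetical sketch of the request/response cycle between the extension and the local Django server: the extension sends the current document prefix, and the server replies with a suggested continuation. The view function, URL path, and JSON field names below are assumptions for illustration, not the project's actual API.

```python
# Hypothetical Django view illustrating how the extension might talk to the
# local server (URL path and JSON field names are assumptions, not the real API).
import json

from django.http import JsonResponse
from django.urls import path
from django.views.decorators.csrf import csrf_exempt


def generate_continuation(prefix: str) -> str:
    """Placeholder for the GPT2 generation step (see the sketch near the top)."""
    return ""  # the real server would return model-generated LaTeX here


@csrf_exempt
def autocomplete(request):
    # Expect a JSON body like {"prefix": "<text before the cursor>"}.
    payload = json.loads(request.body)
    completion = generate_continuation(payload.get("prefix", ""))
    return JsonResponse({"completion": completion})


# urls.py would register the route along these lines:
urlpatterns = [
    path("autocomplete/", autocomplete),
]
```

With the server running on the default http://127.0.0.1:8000, the extension would POST the editor contents to an endpoint like this and insert whatever comes back as the inline suggestion.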
Don't worry, only I need to do this. Currently just trying to gather enough funds for a persistent GPU inference server.
Support
~~Does not work so well if large portions of the document are commented out.~~ Fixed.