Automate experiences with kandi 1-Click Solution kit
Build your AI-based Virtual Assistant in minutes with this fully editable source code. The entire solution is available as a package to download from the source code repository.
✅ Build an NLP-based chatbot / virtual agent
✅ Provide 24/7 support for an interactive experience
✅ Customize source code for your requirements
Download the installer and follow the Kit Deployment Instructions to install the kit in minutes and customize it to your requirements.
Kit Deployment Instructions
⬇️ Download this 1-Click kit_installer file to get started. After downloading, extract the zip, run the file, and follow the steps below.
Note: Be sure to extract the zip file before running it.
Follow the instructions below to run the solution.
1. After successful installation of the kit, press 'Y' to run the kit and execute cells in the notebook.
2. To run the kit manually, press 'N' and locate the zip file 'faq-virtual-agent.zip'.
3. Extract the zip file and navigate to the directory 'faq-virtual-agent'.
4. Open a command prompt in the extracted directory 'faq-virtual-agent' and run the command 'jupyter notebook'.
5. Locate and open the 'Virtual Agent for FAQ.ipynb' notebook from the Jupyter Notebook browser window.
6. Execute the cells in the notebook.
Kit Solution Source
FAQ virtual agents created using this kit are added in this section.
VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web-based interactive environment often used for experiments, whereas VSCode provides developers with a typical IDE experience. This kit uses Jupyter Notebook for development.
The libraries in this group are used for extensive data analysis and exploration, for working with arrays, and for scientific computation and data manipulation.
For building the virtual agent, we use pandas to load, view, and analyse data from a CSV file, and numpy to find the indices of the maximum values in an array.
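A minimal sketch of how these two libraries fit together in this kind of kit. The file name, column names, and score values below are hypothetical, not taken from the kit's actual data:

```python
import numpy as np
import pandas as pd

# Hypothetical FAQ data; in the kit this would be loaded from a CSV file,
# e.g. faq = pd.read_csv("faq.csv")
faq = pd.DataFrame({
    "question": ["How do I reset my password?", "What are your hours?"],
    "answer": ["Use the 'Forgot password' link.", "We are open 9am-5pm."],
})

# Hypothetical similarity scores of a user query against each FAQ question.
scores = np.array([0.91, 0.12])

# np.argmax returns the index of the maximum value; use it to pick the answer.
best = int(np.argmax(scores))
print(faq.loc[best, "answer"])  # → Use the 'Forgot password' link.
```

This is the core retrieval step: score the query against every stored question, then return the answer at the highest-scoring index.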
The libraries in this group are used for analysing and processing unstructured natural language. The data cannot be used in its original form; it must pass through a processing pipeline before machine learning techniques and algorithms can be applied.
We use py-lingualytics to handle data pre-processing steps such as removing numerical values, stop words, and punctuation.
The machine learning libraries and frameworks here help capture state-of-the-art embeddings. Embeddings are vector representations of text that encode its semantics.
For generating sentence embeddings, we use sentence-transformers with pretrained models, and we reference bert-cosine-sim to build a similarity engine for comparing two sets of input text.
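The comparison itself boils down to cosine similarity between embedding vectors. The sketch below uses only numpy and toy 3-dimensional vectors so it runs without downloading a model; in the kit, vectors like these would come from a pretrained sentence-transformers model (e.g. `SentenceTransformer("all-MiniLM-L6-v2").encode(sentences)`, where the model name here is just one common example):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for sentence-transformers output.
query_vec = np.array([0.9, 0.1, 0.0])
faq_vecs = np.array([[0.8, 0.2, 0.1],    # embedding of a similar question
                     [0.0, 0.1, 0.9]])   # embedding of an unrelated question

scores = [cosine_sim(query_vec, v) for v in faq_vecs]
best = int(np.argmax(scores))
print(best)  # → 0 (the similar question wins)
```

Because cosine similarity measures the angle between vectors rather than their length, it ranks semantically close sentences highly even when their raw embeddings differ in magnitude.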