tmsvm | Automatically exported from code.google.com/p/tmsvm | Machine Learning library

by sxhfut | Python | Version: Current | License: No License

kandi X-RAY | tmsvm Summary

tmsvm is a Python library typically used in Artificial Intelligence and Machine Learning applications. tmsvm has no reported bugs or vulnerabilities, and it has low support. However, its build file is not available. You can download it from GitHub.

Automatically exported from code.google.com/p/tmsvm.

Introduction

Text mining has very broad application scenarios in both academia and industry, and text classification is one of its most important techniques. Existing classification techniques are mature: SVM, KNN, Decision Tree, ANN, and NB have all shown good results in different applications, and much excellent prior work has applied these algorithms to text classification. In real commercial applications, however, many problems are still not well solved, such as the high dimensionality and sparsity of text, class imbalance, training on small samples, effective use of unlabeled samples, and how to select the best training samples. These problems lead to the curse of dimensionality, overfitting, and related issues. The goal of this open-source system is to pool collective wisdom: to implement and organize the best-performing algorithms at the frontier of text mining and text classification into a complete system that automates the text mining process, especially text classification. The system is provided in both Python and Java versions.

Key features

On top of wrapping libsvm and liblinear, the system adds feature selection, LSA feature extraction, SVM model parameter selection, a libsvm format conversion module, and some practical tools. Its main features are:

• Wraps and is fully compatible with libsvm and liblinear.
• Chi-square based feature selection (see feature_selection).
• Latent Semantic Analysis based feature extraction (see feature_extraction).
• Supports multiple feature weights, including Binary, Tf, log(tf), TfIdf, tfrf, and tfchi (see feature_weight).
• Normalization of text feature vectors (see Normalization).
• Automatic SVM model parameter selection via cross-validation (see SVM_model_selection).
• Supports multiple evaluation measures, including macro-average, micro-average, F-measure, Recall, Precision, and Accuracy (see evaluation_measure).
• Supports prediction with several SVM models at the same time.
• Uses Python's csc_matrix to store large sparse matrices.
• Integrates a third-party segmentation tool for automatic word segmentation.
• Converts text directly into the format supported by libsvm and liblinear.

What you can do with this system

• Train SVM models on text automatically. Choosing between the Libsvm and Liblinear packages, word segmentation, dictionary generation, feature selection, SVM parameter tuning, and SVM model training can all be completed in one step.
• Predict unknown text with a trained model, returning the predicted label and the membership score for that class. libsvm and liblinear models are recognized automatically.
• Analyze prediction results automatically and judge model quality: compute the F-measure, recall, precision, Macro, and Micro metrics of the predictions, both at a specific threshold and across all thresholds in a specified interval.
• Word segmentation: segment text using the mmseg algorithm.
• Feature selection: select the most representative words in the text.
• SVM parameter selection: identify SVM model parameters via cross-validation. A search range can be specified; for large data sets, a subset is automatically chosen for a coarse-grained search, followed by a fine-grained search on the full data until the optimal parameters are found. For libsvm, c and g (gamma) are selected; for liblinear, c is selected.
• Generate libsvm/liblinear input format directly from text. libsvm, liblinear, and other data mining software such as weka require data in a vector format; this system can generate that format: label index:value (a minimal sketch of the conversion appears at the end of this overview).
• SVM model training with libsvm or liblinear.
• Feature extraction with LSA to improve classification performance.

Getting started

# Add the TMSVM system path to the Python search path
import sys
sys.path.insert(0, yourPath + "/tmsvm/src")  # forward slashes avoid "\t" being read as a tab
import tms

# Analyze the prediction results and judge the model's quality
tms.tms_analysis("../tms.result")

Calling from the command line

# Train on the binary_seged.train file in the data folder
$ python auto_train.py [options] ../data/binary_seged.train

# Use the trained model to predict the binary_seged.test file in the data folder
$ python predict.py ../data/binary_seged.test ../model/tms.config

# Analyze the prediction results and judge the model's quality
$ python result_analysis.py ../tms.result

The calls above all use the system's default parameters; see the programming interface for more specific and flexible options.

Input format

label value1 [value2]

Here label is the class label. For binary classification, it is recommended that positive samples be 1 and negative samples be -1; for multi-class classification, label can be any integer. value is the text content. label and value, as well as value1 and value2, must be separated by a special character such as "\t".

Model output

The model results are placed in a "model" folder under the specified save path, containing 3 files, by default dic.key, tms.model, and tms.config. dic.key is the dictionary after feature selection; tms.model is the trained SVM classification model; tms.config is the model's configuration file, recording the parameters used during training. Temporary files are placed in a "temp" folder, containing two files: tms.param and tms.train. tms.param records the parameters tried during SVM model parameter selection; tms.train is the input consumed by the libsvm and liblinear trainers.

Source layout

• src: the system's source code, providing 5 programs that can be invoked directly under Linux; auto_train.py, train.py, and predict.py are the command-line entry points. tms.py is the main file for use from a program: import tms gives access to all of the system's functions. The remaining files implement the individual features.
• lsa_src: source of the LSA model.
• dependence: packages the system depends on, including the support packages (dll, so files) for libsvm, liblinear, and Pymmseg on 32-bit and 64-bit Linux and on Windows.
• tools: some useful tools, including result_analysis.py.
• java: the Java version of the model prediction program.

Project changelog

• 2012/09/21: Fixed bugs under Linux; regenerated the Windows and Linux versions.
• 2012/03/08: Added a stem module and fixed several bugs.
• 2011/11/22: Official release of tmsvm.

Contact

Email: zhzhl202@163.com
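To make the label index:value conversion mentioned above concrete, here is a minimal sketch (not part of tmsvm; the sample texts and plain term-frequency weights are stand-ins for tmsvm's segmentation, dictionary, and feature_weight steps):

# Convert labeled, whitespace-segmented text into libsvm/liblinear
# input lines of the form "label index:value".
from collections import Counter

samples = [
    (1, "good fast cheap"),     # positive sample
    (-1, "slow broken cheap"),  # negative sample
]

# Build a dictionary mapping each term to a 1-based feature index.
vocab = {}
for _, text in samples:
    for term in text.split():
        vocab.setdefault(term, len(vocab) + 1)

lines = []
for label, text in samples:
    tf = Counter(text.split())
    pairs = sorted((vocab[t], c) for t, c in tf.items())  # indices must ascend
    lines.append(f"{label} " + " ".join(f"{i}:{v}" for i, v in pairs))

print("\n".join(lines))
# prints: 1 1:1 2:1 3:1
#         -1 3:1 4:1 5:1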

Support

tmsvm has a low active ecosystem.
It has 1 star and 1 fork. There is 1 watcher for this library.
It has had no major release in the last 6 months.
There are 2 open issues and 0 closed issues. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of tmsvm is current.

Quality

              tmsvm has no bugs reported.

Security

              tmsvm has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              tmsvm does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

tmsvm releases are not available. You will need to build from the source code and install it.
tmsvm has no build file. You will need to create the build yourself to build the component from source.


            tmsvm Key Features

            No Key Features are available at this moment for tmsvm.

            tmsvm Examples and Code Snippets

            No Code Snippets are available at this moment for tmsvm.

            Community Discussions

            QUESTION

            Using RNN Trained Model without pytorch installed
            Asked 2022-Feb-28 at 20:17

I have trained an RNN model with PyTorch. I need to use the model for prediction in an environment where I'm unable to install PyTorch because of a strange dependency issue with glibc. However, I can install numpy, scipy, and other libraries. So I want to use the trained model, with the network definition, without PyTorch.

I have the weights of the model, as I saved it with its state dict and weights in the standard way, but I can also save them using just json/pickle files or similar.

            I also have the network definition, which depends on pytorch in a number of ways. This is my RNN network definition.

            ...

            ANSWER

            Answered 2022-Feb-17 at 10:47

            You should try to export the model using torch.onnx. The page gives you an example that you can start with.

            An alternative is to use TorchScript, but that requires torch libraries.

Both of these can be run without Python. You can load TorchScript in a C++ application: https://pytorch.org/tutorials/advanced/cpp_export.html

ONNX is much more portable, and you can use it in languages such as C#, Java, or JavaScript: https://onnxruntime.ai/ (even in the browser).

            A running example

Just modifying your example a little to get past the errors I found.

Notice that, via tracing, any if/elif/else, for, or while construct will be unrolled.
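A hedged sketch of the ONNX route (the tiny network, shapes, and file name below are placeholders, not the asker's model):

# Export a toy RNN to ONNX via tracing; inference then needs no torch.
import torch

class TinyRNN(torch.nn.Module):  # placeholder for the asker's network
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.RNN(input_size=8, hidden_size=16, batch_first=True)
        self.fc = torch.nn.Linear(16, 2)

    def forward(self, x):
        out, _ = self.rnn(x)          # out: (batch, seq_len, hidden)
        return self.fc(out[:, -1])    # classify from the last time step

model = TinyRNN().eval()
dummy = torch.randn(1, 5, 8)          # (batch, seq_len, features)
torch.onnx.export(model, dummy, "tiny_rnn.onnx",
                  input_names=["x"], output_names=["logits"])

# Inference side, with onnxruntime instead of torch:
# import numpy as np, onnxruntime as ort
# sess = ort.InferenceSession("tiny_rnn.onnx")
# logits = sess.run(None, {"x": np.random.randn(1, 5, 8).astype(np.float32)})[0]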

            Source https://stackoverflow.com/questions/71146140

            QUESTION

            Flux.jl : Customizing optimizer
            Asked 2022-Jan-25 at 07:58

I'm trying to implement a gradient-free optimizer function to train convolutional neural networks with Julia using Flux.jl. The reference paper is this: https://arxiv.org/abs/2005.05955. This paper proposes RSO, a gradient-free optimization algorithm that updates a single weight at a time on a sampling basis. The pseudocode of this algorithm is depicted in the picture below.

(Image: pseudocode of the RSO optimizer.)

I'm using the MNIST dataset.

            ...

            ANSWER

            Answered 2022-Jan-14 at 23:47

Based on the paper you shared, it looks like you need to change the weight arrays per output neuron per layer. Unfortunately, this means that the implementation of your optimization routine is going to depend on the layer type, since an "output neuron" for a convolution layer is quite different from one in a fully-connected layer. In other words, just looping over Flux.params(model) is not going to be sufficient, since that is just a set of all the weight arrays in the model, and each weight array is treated differently depending on which layer it comes from.

            Fortunately, Julia's multiple dispatch does make this easier to write if you use separate functions instead of a giant loop. I'll summarize the algorithm using the pseudo-code below:
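The answer's pseudo-code itself is not shown here; as a rough stand-in, a numpy sketch of the single-weight sampling step the paper describes (the toy loss, step scale, and round count are assumptions):

# RSO's core idea: perturb one weight at a time and keep whichever of
# {w - d, w, w + d} gives the lowest loss.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)                     # weights of one toy "layer"

def loss(w):
    return np.sum((w - 1.0) ** 2)          # stand-in objective

for _ in range(50):                        # sampling rounds
    for i in range(w.size):
        d = abs(rng.normal(scale=0.1))     # sampled step size
        candidates = (w[i] - d, w[i], w[i] + d)
        scores = []
        for c in candidates:
            w[i] = c                       # try each candidate in place
            scores.append(loss(w))
        w[i] = candidates[int(np.argmin(scores))]

print(w)                                   # drifts toward the optimum at all ones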

            Source https://stackoverflow.com/questions/70641453

            QUESTION

            How can I check a confusion_matrix after fine-tuning with custom datasets?
            Asked 2021-Nov-24 at 13:26

This question is the same as How can I check a confusion_matrix after fine-tuning with custom datasets?, on Data Science Stack Exchange.

            Background

I would like to check a confusion_matrix, including precision, recall, and f1-score, like the one below, after fine-tuning with custom datasets.

The fine-tuning process and task are Sequence Classification with IMDb Reviews, from the Fine-tuning with custom datasets tutorial on Hugging Face.

After finishing the fine-tuning with Trainer, how can I check a confusion_matrix in this case?

(Example output image from the original site: a confusion_matrix including precision, recall, and f1-score.)

            ...

            ANSWER

            Answered 2021-Nov-24 at 13:26

What you could do in this situation is to iterate over the validation set (or the test set, for that matter) and manually create a list of y_true and y_pred.
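A hedged sketch of that loop using the Trainer's predict helper and scikit-learn; trainer and val_dataset are assumed to come from the tutorial's fine-tuning step:

# Collect y_true / y_pred from a fine-tuned HF Trainer, then print a
# confusion matrix plus precision, recall, and f1-score.
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

output = trainer.predict(val_dataset)          # run the model on the val set
y_pred = np.argmax(output.predictions, axis=-1)
y_true = output.label_ids

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))   # precision, recall, f1 per class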

            Source https://stackoverflow.com/questions/68691450

            QUESTION

CUDA OOM - But the numbers don't add up?
            Asked 2021-Nov-23 at 06:13

            I am trying to train a model using PyTorch. When beginning model training I get the following error message:

            RuntimeError: CUDA out of memory. Tried to allocate 5.37 GiB (GPU 0; 7.79 GiB total capacity; 742.54 MiB already allocated; 5.13 GiB free; 792.00 MiB reserved in total by PyTorch)

I am wondering why this error occurs. As I see it, I have 7.79 GiB total capacity, and the numbers it states (742 MiB + 5.13 GiB + 792 MiB) do not add up to more than 7.79 GiB. When I check nvidia-smi, I see these processes running:

            ...

            ANSWER

            Answered 2021-Nov-23 at 06:13

            This is more of a comment, but worth pointing out.

The reason in general is indeed what talonmies commented, but you are summing up the numbers incorrectly. Let's see what happens when tensors are moved to the GPU (I tried this on my PC with an RTX 2060 with 5.8 GiB of usable GPU memory in total):

            Let's run the following python commands interactively:
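The interactive commands themselves are not shown here; a similar experiment, assuming a CUDA-capable PyTorch install, could look like this:

# The caching allocator reserves more memory than the tensors strictly
# need, which is why the reported numbers don't simply add up.
import torch

x = torch.randn(1024, 1024, 256, device="cuda")   # ~1 GiB of float32
print(torch.cuda.memory_allocated() / 2**30)      # GiB used by tensors
print(torch.cuda.memory_reserved() / 2**30)       # GiB held by the allocator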

            Source https://stackoverflow.com/questions/70074789

            QUESTION

            How to compare baseline and GridSearchCV results fair?
            Asked 2021-Nov-04 at 21:17

I am a bit confused about comparing the best GridSearchCV model and the baseline.
            For example, we have classification problem.
            As a baseline, we'll fit a model with default settings (let it be logistic regression):

            ...

            ANSWER

            Answered 2021-Nov-04 at 21:17

            No, they aren't comparable.

Your baseline model used X_train to fit the model, and then you used the fitted model to score the X_train sample. This is like cheating, because the model will already perform well: you are evaluating it on data it has already seen.

            The grid searched model is at a disadvantage because:

            1. It's working with less data since you have split the X_train sample.
            2. Compound that with the fact that it's getting trained with even less data due to the 5 folds (it's training with only 4/5 of X_val per fold).

            So your score for the grid search is going to be worse than your baseline.

Now you might ask, "So what's the point of best_model.best_score_?" Well, that score is used to compare all the models tried while searching for the optimal hyperparameters in your search space, but it should in no way be used to compare against a model that was trained outside of the grid search context.

            So how should one go about conducting a fair comparison?

1. Split your training data for both models; a sketch of this follows below.
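A hedged sketch of that setup (synthetic data and a small parameter grid, purely for illustration):

# Tune on the training split only, then score baseline and tuned model
# on the same held-out test set.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)

grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    {"C": [0.01, 0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)             # cross-validation stays inside X_train

print(baseline.score(X_test, y_test))  # both models scored on unseen data
print(grid.score(X_test, y_test))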

            Source https://stackoverflow.com/questions/69844028

            QUESTION

            Getting Error 524 while running jupyter lab in google cloud platform
            Asked 2021-Oct-15 at 02:14

I am not able to access a JupyterLab instance created on Google Cloud.

I created one notebook using Google AI Platform. I was able to start it and work, but it suddenly stopped and I am not able to start it now. I tried rebuilding and restarting JupyterLab, but to no avail. I have checked my disk usage as well, which is only 12%.

            I tried the diagnostic tool, which gave the following result:

but that didn't fix it.

            Thanks in advance.

            ...

            ANSWER

            Answered 2021-Aug-20 at 14:00

            QUESTION

            TypeError: brain.NeuralNetwork is not a constructor
            Asked 2021-Sep-29 at 22:47

            I am new to Machine Learning.

Having followed the steps in this simple Machine Learning exercise using the Brain.js library, it beats my understanding why I keep getting the error message below:

            I have double-checked my code multiple times. This is particularly frustrating as this is the very first exercise!

            Kindly point out what I am missing here!

            Find below my code:

            ...

            ANSWER

            Answered 2021-Sep-29 at 22:47

Turns out it's just documented incorrectly.

            In reality the export from brain.js is this:

            Source https://stackoverflow.com/questions/69348213

            QUESTION

            Ordinal Encoding or One-Hot-Encoding
            Asked 2021-Sep-04 at 06:43

If we are not sure about the nature of categorical features, i.e. whether they are nominal or ordinal, which encoding should we use: Ordinal-Encoding or One-Hot-Encoding? Is there a clearly defined rule on this topic?

I see a lot of people using Ordinal-Encoding on categorical data that doesn't have a direction. Suppose a frequency table:

            ...

            ANSWER

            Answered 2021-Sep-04 at 06:43

You're right. Just one thing to consider when choosing between OrdinalEncoder and OneHotEncoder: does the order of the data matter?

            Most ML algorithms will assume that two nearby values are more similar than two distant values. This may be fine in some cases e.g., for ordered categories such as:

            • quality = ["bad", "average", "good", "excellent"] or
            • shirt_size = ["large", "medium", "small"]

            but it is obviously not the case for the:

            • color = ["white","orange","black","green"]

column (except for cases where you need to consider a spectrum, say from white to black; note that in this case the white category should be encoded as 0 and black as the highest number in your categories), or if you have cases where, say, categories 0 and 4 are more similar than categories 0 and 1. To fix this issue, a common solution is to create one binary attribute per category (One-Hot encoding).
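A quick hedged illustration with scikit-learn, using the color column from the answer:

# OrdinalEncoder imposes an (alphabetical) order on a nominal column;
# OneHotEncoder creates one binary attribute per category instead.
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

colors = [["white"], ["orange"], ["black"], ["green"]]

print(OrdinalEncoder().fit_transform(colors))            # e.g. black=0 ... white=3
print(OneHotEncoder().fit_transform(colors).toarray())   # one column per category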

            Source https://stackoverflow.com/questions/69052776

            QUESTION

            How to increase dimension-vector size of BERT sentence-transformers embedding
            Asked 2021-Aug-15 at 13:35

I am using sentence-transformers for semantic search, but sometimes it does not understand the contextual meaning and returns wrong results, e.g. BERT problem with context/semantic search in italian language.

By default the vector size of the sentence embedding is 768 columns, so how do I increase that dimension so that it can understand the contextual meaning in more depth?

            code:

            ...

            ANSWER

            Answered 2021-Aug-10 at 07:39

Increasing the dimension of a trained model is not possible (without many difficulties and re-training the model). The model you are using was pre-trained with dimension 768, i.e. all weight matrices of the model have a corresponding number of trained parameters. Increasing the dimensionality would mean adding parameters, which would then need to be trained.

            Also, the dimension of the model does not reflect the amount of semantic or context information in the sentence representation. The choice of the model dimension reflects more a trade-off between model capacity, the amount of training data, and reasonable inference speed.

            If the model that you are using does not provide representation that is semantically rich enough, you might want to search for better models, such as RoBERTa or T5.
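For instance, a hedged sketch of checking a model's fixed embedding dimension (the model name is only an example of a stronger general-purpose model):

# The dimension is baked into the pre-trained model; you can inspect it
# but not simply enlarge it.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")
emb = model.encode("a sample sentence")
print(model.get_sentence_embedding_dimension())   # 768 for this model
print(emb.shape)                                  # (768,)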

            Source https://stackoverflow.com/questions/68686272

            QUESTION

            How to identify what features affect predictions result?
            Asked 2021-Aug-11 at 15:55

I have a table with features that were used to build a model that predicts whether a user will buy a new insurance or not. In the same table I have the probability of belonging to class 1 (will buy) and class 0 (will not buy) predicted by this model. I don't know what kind of algorithm was used to build this model; I only have its predicted probabilities.

Question: how do I identify which features affect these prediction results? Do I need to build a correlation matrix or conduct any tests?

            Table example:

            ...

            ANSWER

            Answered 2021-Aug-11 at 15:55

You could build a model like this:

x = the features you have, y = true_label.

From that you can extract the feature importances. Also, if you want to go the extra mile, you can do bootstrapping, so that the feature importances are more stable (statistically).
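A hedged sketch of that surrogate-model idea; the synthetic data stands in for the table of features, and true_label is assumed to be available:

# Fit an interpretable model on the same features against the true
# labels, then read off the feature importances.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

surrogate = RandomForestClassifier(random_state=0).fit(X, y)
for i, imp in enumerate(surrogate.feature_importances_):
    print(f"feature_{i}: {imp:.3f}")   # higher = more influential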

            Source https://stackoverflow.com/questions/68744565

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install tmsvm

            You can download it from GitHub.
You can use tmsvm like any standard Python library. You will need a development environment consisting of a Python distribution (including header files), a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask them on the community page on Stack Overflow.
CLONE

• HTTPS: https://github.com/sxhfut/tmsvm.git
• CLI: gh repo clone sxhfut/tmsvm
• SSH: git@github.com:sxhfut/tmsvm.git
