JXQuant | This library mainly shares the strategy source code published on the "匠芯量化" WeChat official account; for more strategy details, follow the WeChat official account "匠芯量化" (search for the account name on WeChat) | SQL Database library
kandi X-RAY | JXQuant Summary
- Init_StockALL_Sp.py: [Data collection] Uses the tushare API to fetch daily bar data and store it in the local database (a minimal sketch of this step follows the list).
- DC.py: [Data preprocessing] Consolidates the locally stored daily bar data into a single training set.
- SVM.py: [SVM modeling] Builds, trains, and predicts with an SVM model for individual stocks.
- Model_Evaluate.py: [Model evaluation] Evaluates the model via backtesting combined with walk-forward modeling; mainly computes precision, recall, and the F1 score, and stores them in the results table.
- Portfolio.py: [Position management] Based on Markowitz portfolio theory, computes the portfolio's risk, position weights, and Sharpe ratio over a time series, with two kinds of results: the market direction and the optimal-return direction.
- Deal.py: [Simulated trading] A wrapper class used to fetch the latest asset-account data during simulated trading.
- Operator.py: [Simulated trading] Wrapper functions used to execute buy and sell operations during simulated trading.
- Cap_Update_daily.py: [Simulated trading] Wrapper functions used to update the asset table daily during backtesting.
- Filter.py: [Strategy backtesting] Wrapper functions that handle simple bookkeeping during backtesting (updating holding days, buy/sell ordering, etc.).
- main.py: [Strategy backtesting] The strategy framework and the main backtesting entry point.
- stock_my_capital.sql: [Strategy backtesting] A database table used by the backtesting main function; it can be imported directly. This is the asset-ledger table; its schema is described in the articles, and it contains one initial row that defines the starting capital, which can be adjusted to your backtesting scenario.
- stock_stock_index.sql: [Strategy backtesting] A database table used by the backtesting main function; it can be imported directly. This is the market-index table and contains some index quotes.
- stock_model_ev_mid.sql: [Model evaluation] An intermediate table used during model evaluation to temporarily store part of the data in the backtesting time series for the final F1-score calculation.
- stock_model_ev_resu.sql: [Model evaluation] The model-evaluation results table, which stores a stock's F1 score at a given point in time.
- stock_my_stock_pool.sql: [Strategy backtesting] The current stock-holdings table; main fields: stock code held, average fill price, position size, and holding days.
- stock_stock_all.sql: [Strategy backtesting] The daily quote table, containing daily bar data for all stocks.
- stock_stock_info.sql: [Strategy backtesting] A slimmed-down version of the daily quote table with the same schema but redundant data removed, used to speed up backtesting.
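Below is a minimal sketch of the data-collection step described for Init_StockALL_Sp.py: pulling daily bars through the tushare Pro API and appending them to a local MySQL table. The token, connection string, table name, and sample stock codes are assumptions for illustration, not the repository's actual code.

```python
# Minimal sketch of the data-collection idea (not the repository's actual code).
# Assumptions: a tushare Pro token, a local MySQL database named "stock", and a
# table "stock_all" whose columns match tushare's daily-bar fields.
import tushare as ts
from sqlalchemy import create_engine

pro = ts.pro_api("YOUR_TUSHARE_TOKEN")  # token is a placeholder
engine = create_engine("mysql+pymysql://root:root@127.0.0.1:3306/stock?charset=utf8")

def collect_daily(ts_code: str, start: str, end: str) -> None:
    """Fetch daily bars for one stock and append them to the local table."""
    df = pro.daily(ts_code=ts_code, start_date=start, end_date=end)
    df.to_sql("stock_all", engine, if_exists="append", index=False)

if __name__ == "__main__":
    for code in ["000001.SZ", "600036.SH"]:  # sample codes only
        collect_daily(code, "20230101", "20231231")
```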
Top functions reviewed by kandi - BETA
- Filter stock data
- Returns the sell price
- Get price
- Get portfolio
JXQuant Key Features
JXQuant Examples and Code Snippets
Community Discussions
Trending Discussions on SQL Database
QUESTION
I'm not really sure what caused this, but it was most likely from exiting the terminal while my Rails server, which was connected to the PostgreSQL database, was closed (not a good practice, I know, but lesson learned!).
I've already tried the following:
- Rebooting my machine (using MBA M1 2020)
- Restarting PostgreSQL using homebrew
brew services restart postgresql
- Re-installing PostgreSQL using Homebrew
- Updating PostgreSQL using Homebrew
- I also tried following this link but when I run
cd Library/Application\ Support/Postgres
the terminal tells me the Postgres folder doesn't exist, so I'm kind of lost already. Although I have a feeling that deleting postmaster.pid would really fix my issue. Any help would be appreciated!
ANSWER
Answered 2022-Jan-13 at 15:19
My original answer only included the troubleshooting steps below and a workaround. I have now decided to fix it properly, by brute force: removing all clusters and reinstalling, since I didn't have any data there to keep. It was something along these lines, on my Ubuntu 21.04 system:
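Those Ubuntu commands were truncated from this page and are not reproduced here. As a rough illustration of the asker's own theory instead (a stale postmaster.pid left behind by an unclean shutdown), here is a Python sketch that checks for and removes the lock file under a Homebrew data directory; the path is an assumption, and this is not the answerer's fix:

```python
# Rough sketch only (not the answerer's fix). Assumes a Homebrew PostgreSQL
# install on Apple Silicon; the data-directory path is an assumption.
from pathlib import Path

data_dir = Path("/opt/homebrew/var/postgres")  # placeholder path
pid_file = data_dir / "postmaster.pid"

if pid_file.exists():
    print(f"Found stale lock file: {pid_file}")
    # Only remove this if you are sure no postgres process is still running.
    pid_file.unlink()
    print("Removed; restart PostgreSQL with `brew services restart postgresql`.")
else:
    print("No postmaster.pid found; the problem is probably elsewhere.")
```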
QUESTION
I am using Go's MongoDB driver (https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.8.0/mongo#section-documentation) and want to obtain the version of the MongoDB server deployed.
For instance, if it had been a MySQL database, I could do something like the below:
...ANSWER
Answered 2022-Mar-26 at 08:04
The MongoDB version can be acquired by running a command, specifically the buildInfo command.
Using the shell, this is how you could do it:
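The shell snippet was truncated from this page. As an equivalent sketch in Python (pymongo is an assumption here; the original question uses the Go driver), the same buildInfo command can be issued like this:

```python
# Minimal sketch (assumptions: pymongo is installed and a mongod is reachable).
# The original question uses the Go driver; this shows the same buildInfo
# admin command issued from Python instead.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # connection string is a placeholder
build_info = client.admin.command("buildInfo")     # run the buildInfo admin command
print(build_info["version"])                       # e.g. "5.0.6"
```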
QUESTION
What am I trying to do?
Django does not support setting an enum data type in a MySQL database. Using the code below, I tried to set the enum data type.
Error Details
_mysql.connection.query(self, query) django.db.utils.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'NOT NULL, `created_at` datetime(6) NOT NULL, `user_id` bigint NOT NULL)' at line 1")
Am I missing anything?
Enumeration class with all choices
...ANSWER
Answered 2021-Sep-29 at 19:39
You can print out the SQL for that migration to see specifically what's wrong, but defining db_type to return "enum" is definitely not the right way to approach it.
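A common alternative, shown here only as a sketch (the model, field, and choice names are hypothetical, and this is not the answerer's code), is to keep an ordinary column type and enforce the enumeration at the Django level with TextChoices:

```python
# Minimal Django sketch (model and field names are hypothetical).
# Instead of forcing an ENUM column type via db_type, keep a plain varchar
# column and let Django enforce the allowed values through choices.
from django.db import models

class TransactionType(models.TextChoices):
    CREDIT = "CREDIT", "Credit"
    DEBIT = "DEBIT", "Debit"

class Transaction(models.Model):
    type = models.CharField(
        max_length=10,
        choices=TransactionType.choices,
        default=TransactionType.CREDIT,
    )
    created_at = models.DateTimeField(auto_now_add=True)
```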
QUESTION
I am having difficulties scaffolding an existing MySQL database using EF Core. I have added the required dependencies as mentioned in the Oracle doc:
...ANSWER
Answered 2021-Dec-12 at 10:11
I came across the same issue trying to scaffold an existing MySQL database. It looks like the latest version of MySql.EntityFrameworkCore (6.0.0-preview3.1) still uses the EFCore 5.0 libraries and has not been updated to EFCore 6.0.
It also seems Microsoft.EntityFrameworkCore.Diagnostics was last implemented in EFCore 5 and removed in 6.
When I downgraded all the packages to the 5 version level, I was able to run the scaffold command without that error.
QUESTION
I'm using Lambda with RDS Proxy to be able to reuse DB connections to a MySQL database.
Should I close the connection after executing my queries or leave it open for the RDS Proxy to handle?
And if I should close the connection, then what's the point of using an RDS Proxy in the first place?
Here's an example of my lambda function:
...ANSWER
Answered 2021-Dec-11 at 18:10
The RDS Proxy sits between your application and the database and should not result in any application change other than using the proxy endpoint.
Should I close the connection after executing my queries or leave it open for the RDS Proxy to handle?
You should not leave database connections open, regardless of whether or not you use a database proxy.
Connections are a limited and relatively expensive resource.
The rule of thumb is to open connections as late as possible & close DB connections as soon as possible. Connections that are not explicitly closed might not be added or returned to the pool. Closing database connections is being a good database client.
If you keep DB resources tied up with many open connections, you'll find yourself needing more vCPUs for your DB instance, which in turn results in a higher RDS Proxy price tag.
And if I should close the connection, then what's the point of using an RDS Proxy in the first place?
The point is that your Amazon RDS Proxy instance maintains a pool of established connections to your RDS database instances for you - it sits between your application and your RDS database.
The proxy is not responsible for closing local connections that you make nor should it be.
It is responsible for helping by managing connection multiplexing/pooling & sharing automatically for applications that need it.
An example of an application that needs it is clearly mentioned in the AWS docs:
Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server, and may open and close database connections at a high rate, exhausting database memory and compute resources.
To prevent any doubt, also feel free to check out an AWS-provided example that closes connections here (linked to from docs), or another one in the AWS Compute Blog here.
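To make the open-late/close-early advice concrete, here is a minimal Python sketch of a Lambda handler; the question's function is written in Node.js, so pymysql, the environment variables, and the query below are assumptions for illustration only:

```python
# Minimal sketch, assuming Python + pymysql; the question's function is Node.js.
# The proxy endpoint, credentials, and query are placeholders.
import os
import pymysql

def handler(event, context):
    # Open the connection as late as possible, through the RDS Proxy endpoint.
    conn = pymysql.connect(
        host=os.environ["PROXY_ENDPOINT"],   # hypothetical env var
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
        connect_timeout=5,
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            row = cur.fetchone()
        return {"result": row[0]}
    finally:
        # Close as soon as possible; the proxy keeps its own pool of
        # established connections to the database behind the scenes.
        conn.close()
```

The proxy then multiplexes these short-lived client connections onto its own pool of long-lived connections to the database.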
QUESTION
I have a PostgreSQL database hosted on Heroku which is throwing me this error that I can't wrap my head around.
...ANSWER
Answered 2022-Feb-13 at 22:21
AUTOINCREMENT is not a valid option for CREATE TABLE in Postgres. You can use SERIAL or BIGSERIAL:
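The answer's example statement was truncated from this page; as a sketch of the same idea (psycopg2 and the table and column names are assumptions):

```python
# Minimal sketch, assuming psycopg2; table and column names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # connection string is a placeholder
with conn, conn.cursor() as cur:
    # BIGSERIAL creates a bigint column backed by an auto-incrementing sequence,
    # which is the Postgres replacement for AUTOINCREMENT.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS widgets (
            id   BIGSERIAL PRIMARY KEY,
            name TEXT NOT NULL
        )
    """)
conn.close()
```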
QUESTION
Our stack is Node.js with MySQL. We're using MySQL connection pooling, and our MySQL database is managed on AWS Aurora. In the case of an auto-failover, the master DB changes; the hostname stays the same, but the connections inside the pool stay connected to the wrong DB. The only way we found to reset the connections is to roll our servers.
This is a demonstration of a solution I think could solve this issue, but I would prefer a solution without the setInterval:
...ANSWER
Answered 2022-Feb-04 at 12:22
Instead of manually monitoring the DB health, as you have also hinted, ideally we would subscribe to the failover events published by AWS RDS Aurora.
There are multiple failover events for the DB cluster listed in Amazon RDS event categories and event messages.
You can test which one of them is the most reliable trigger for poolCluster.end() in your use case, though.
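As a rough illustration only, not the answerer's code: a Python sketch of a handler that reacts to an RDS failover event delivered through an SNS subscription and resets the application's pool. The event parsing is deliberately loose, and reset_connection_pool() is a hypothetical stand-in for poolCluster.end():

```python
# Minimal sketch (assumptions: the RDS event subscription publishes to SNS,
# which invokes this Lambda; reset_connection_pool() is a hypothetical hook
# playing the role of poolCluster.end() in the Node.js code).
def reset_connection_pool():
    """Hypothetical hook: tear down and recreate the application's MySQL pool."""
    print("resetting connection pool")

def handler(event, context):
    # SNS-triggered Lambda: each record carries one RDS event notification.
    for record in event.get("Records", []):
        message = record["Sns"]["Message"]  # text of the RDS event notification
        if "failover" in message.lower():
            reset_connection_pool()
```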
QUESTION
I am trying to run a server with a MySQL database; however, I keep getting this huge error and I am not sure why.
...ANSWER
Answered 2021-Aug-11 at 14:38
Maybe a solution. Source: https://dba.stackexchange.com/questions/8239/how-to-easily-convert-utf8-tables-to-utf8mb4-in-mysql-5-5
Change your CHARACTER SET and COLLATE to utf8mb4.
For each database:
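The statements from the linked answer were truncated from this page; as a sketch of the idea (pymysql plus the database and table names are assumptions), the conversion comes down to ALTER statements along these lines:

```python
# Minimal sketch, assuming pymysql; database and table names are placeholders.
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="secret")
with conn.cursor() as cur:
    # Default character set / collation for the database itself.
    cur.execute(
        "ALTER DATABASE mydb CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci"
    )
    # Convert each table (and its text columns) to utf8mb4.
    cur.execute(
        "ALTER TABLE mydb.mytable CONVERT TO CHARACTER SET utf8mb4 "
        "COLLATE utf8mb4_unicode_ci"
    )
conn.commit()
conn.close()
```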
QUESTION
I am following this tutorial on migrating data from an Oracle database to a Cloud SQL PostgreSQL instance.
I am using the Google Provided Streaming Template Datastream to PostgreSQL
At a high level this is what is expected:
- Datastream exports backfill and change data in Avro format from the source Oracle database into the specified Cloud Storage bucket location.
- This triggers the Dataflow job to pick up the Avro files from that Cloud Storage location and insert them into the PostgreSQL instance.
When the Avro files are uploaded to the Cloud Storage location, the job is indeed triggered, but when I check the target PostgreSQL database, the required data has not been populated.
When I check the job logs and worker logs, there are no error logs. When the job is triggered, these are the logs that are produced:
...ANSWER
Answered 2022-Jan-26 at 19:14
This answer is accurate as of 19th January 2022.
Upon manually debugging this Dataflow job, I found that the issue is that the job looks for a schema with exactly the same name as the value passed for the databaseName parameter, and there is no other input parameter through which a schema name could be passed. Therefore, for this job to work, the tables have to be created/imported into a schema with the same name as the database.
However, as @Iñigo González said, this Dataflow template is currently in beta and seems to have some bugs: as soon as this was resolved I ran into another issue, which required me to change the source code of the Dataflow template job itself and build a custom Docker image for it.
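As a small illustration of the workaround described above (not part of the original answer; connection details and names are hypothetical), the target schema can be created up front with the same name as the databaseName parameter value:

```python
# Minimal sketch, assuming psycopg2; the schema name mirrors the value passed
# to the template's databaseName parameter (hypothetical here).
import psycopg2

DATABASE_NAME = "ordersdb"  # value passed as databaseName to the Dataflow template

conn = psycopg2.connect(dbname=DATABASE_NAME, user="postgres", password="secret",
                        host="10.0.0.5")  # Cloud SQL IP is a placeholder
with conn, conn.cursor() as cur:
    # The template writes into a schema named after databaseName, so make sure
    # that schema exists and holds the target tables before running the job.
    cur.execute(f'CREATE SCHEMA IF NOT EXISTS "{DATABASE_NAME}"')
conn.close()
```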
QUESTION
I have to move a large Odoo (v13) database of almost 1.2 TB (database + filestore). I can't use the UI for that (it keeps loading for 10+ hours without a result), and I don't want to move only the PostgreSQL database, so I need the filestore too. What should I do? Extract the DB and copy-paste the filestore folder? Thanks a lot.
...ANSWER
Answered 2022-Jan-14 at 16:59
You can move the database and the filestore separately. Move your Odoo PostgreSQL database with a normal Postgres backup/restore cycle (not the Odoo UI backup/restore); this will copy the database to your new server. Then move your Odoo filestore to the new location as a filesystem-level copy. This is enough to get the new environment running.
I assume you mean moving to a new server, not just moving to a new location on the same filesystem of the same server.
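For illustration only, since the answer itself gives no script: a Python sketch of the backup/restore cycle plus the filestore copy. All paths, the database name, and the filestore locations are assumptions.

```python
# Minimal sketch (all paths, hostnames, and the database name are placeholders).
# It wraps the usual pg_dump/pg_restore cycle and a filesystem copy of the filestore.
import shutil
import subprocess

DB_NAME = "odoo13"
DUMP_FILE = "/tmp/odoo13.dump"

# 1. Dump the database on the old server (custom format keeps it compact).
subprocess.run(["pg_dump", "-Fc", "-f", DUMP_FILE, DB_NAME], check=True)

# 2. Restore it on the new server (run there, or point -h at the new host;
#    assumes an empty target database with this name already exists).
subprocess.run(["pg_restore", "-d", DB_NAME, "--no-owner", DUMP_FILE], check=True)

# 3. Copy the filestore directory for this database to the new data directory.
shutil.copytree(
    f"/old/odoo/.local/share/Odoo/filestore/{DB_NAME}",
    f"/new/odoo/.local/share/Odoo/filestore/{DB_NAME}",
)
```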
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install JXQuant
You can use JXQuant like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.