odbc | Connect to ODBC databases | SQL Database library
kandi X-RAY | odbc Summary
The goal of the odbc package is to provide a DBI-compliant interface to Open Database Connectivity (ODBC) drivers. This allows for an efficient, easy-to-set-up connection to any database with ODBC drivers available, including SQL Server, Oracle, MySQL, PostgreSQL, SQLite, and others. The implementation builds on the nanodbc C++ library.
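For a sense of the interface, here is a minimal connection sketch, assuming a configured data source named "MyDSN" (a hypothetical name; in practice you would use your own DSN or pass driver/server/credential arguments to dbConnect):

    library(DBI)

    # Connect through a DSN configured in odbc.ini or the ODBC Data
    # Source Administrator ("MyDSN" is a placeholder name).
    con <- dbConnect(odbc::odbc(), dsn = "MyDSN")

    dbListTables(con)                  # list tables exposed by the driver
    dbGetQuery(con, "SELECT 1 AS x")   # run a query, returning a data.frame

    dbDisconnect(con)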
Trending Discussions on odbc
QUESTION
I'm having issues getting the ODBC driver for Snowflake to work on an M1 Apple Silicon Mac running Big Sur.
Successfully following the instructions on Snowflake's website gets me to the point where testing the driver from the command line (using iodbctest) using the DSN results in the following error:
...ANSWER
Answered 2021-Sep-21 at 20:50
Big Sur is macOS v11.x. Snowflake supports macOS 10.14 and 10.15 (see Supported OSs), so what you are trying to do is not supported and is unlikely to work.
QUESTION
What approach should I follow to download DDL, DML, and stored procedures from a Teradata database using Python?
I have created the sample code, but what is the approach to download these SQL files for the data migration process?
...ANSWER
Answered 2022-Mar-23 at 11:14
Happy to share that I got the solution for this approach. To get the files in SQL format, use the given code to extract the DDL and DML. The given code is for the sample database dbc.
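The answer's code targets Python; as an illustration of the same idea through the R odbc package this page covers (assuming a configured DSN named "TeradataDSN" and a hypothetical table mydb.mytable, both placeholders), Teradata's SHOW TABLE statement returns a table's DDL, which can then be written out as a .sql file:

    library(DBI)

    # Hypothetical Teradata DSN; any working ODBC connection would do.
    con <- dbConnect(odbc::odbc(), dsn = "TeradataDSN")

    # SHOW TABLE returns the CREATE TABLE DDL as a result set.
    ddl <- dbGetQuery(con, "SHOW TABLE mydb.mytable")
    writeLines(ddl[[1]], "mytable.sql")   # save the DDL for migration

    dbDisconnect(con)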
QUESTION
I need to upload a file (<10 MB) around once a week to a SQL Server 2016 database on a remote server in the same network. Until now it was all within an Access FE/BE, but I want to migrate to SQL Server as the backend.
The attachments I had in MS Access now need to be handled on the SQL database, as I do not want to do this on a file share.
I found many threads about using something like this from SQLShack:
...ANSWER
Answered 2022-Mar-13 at 23:33
I may have found a solution using the links from @AlwaysLearning. The first sub actually answers my question of uploading a file to a remote FILESTREAM SQL Server. The second sub downloads all uploaded files into a given directory.
QUESTION
I am using an ODBC source connected to the Hadoop system and read the column PONum with value 4400023488, of data type text stream [DT_TEXT]. The data is converted into a string [DT_WSTR] using a Data Conversion transformation and then inserted into SQL Server using an OLE DB destination (the destination column's type is a Unicode string, DT_WSTR).
I am able to insert the value into the SQL Server table, but in an incorrect format: 㐴〰㌵㠵㔹. The expected value is 4400023488.
Any suggestions?
...ANSWER
Answered 2022-Mar-13 at 20:04
I have two suggestions:
- Instead of using a Data Conversion transformation, use a Derived Column that converts the DT_TEXT value to DT_STR before converting it to Unicode:
QUESTION
I have an issue with inserting data into a database using the Python package pyodbc, and since I am pretty new to pyodbc and databases in general, I might lack some basic understanding.
I open a connection, and then I want to execute my query. Actually, in this query I call a stored procedure (which I didn't write and am not allowed to change!). This procedure does "one or two" inserts. When I use pyodbc like this
...ANSWER
Answered 2022-Mar-08 at 16:07
You seem to have encountered a quirk in MySQL Connector/ODBC's handling of result sets from a stored procedure. For this example procedure:
QUESTION
I'm trying to load a table from Oracle to Postgres using SSIS, with ~200 million records. Oracle, Postgres, and SSIS are on separate servers.
Reading data from Oracle
To read data from the Oracle database, I am using an OLE DB connection using "Oracle Provider for OLE DB". The OLE DB Source is configured to read data using an SQL command.
In total there are 44 columns, mostly varchar, 11 numeric, and 3 timestamps.
Loading data into Postgres
To load data into Postgres, I am using an ODBC connection. The ODBC destination component is configured to load data in batch mode (not row-by-row insertion).
SSIS configuration
I created an SSIS package that only contains a straightforward Data Flow Task.
Issue
The load seems to take many hours to reach even a million rows. The source query gives results quickly when executed in SQL Developer, but when I tried to export the results it threw a limit-exceeded error.
In SSIS, when I tried to preview the result of the source SQL command, it returned: The system cannot find message text for message number 0x80040e51 in the message file for OraOLEDB. (OraOLEDB)
Note that the source (SQL command) and target table don't have any indexes.
Could you please suggest any methods to improve the load performance?
...ANSWER
Answered 2022-Feb-20 at 09:22
I will try to give some tips to help you improve your package's performance. You should troubleshoot your package systematically to find the performance bottleneck.
Some of the provided links relate to SQL Server. No worries: the same rules apply in all database management systems.
1. Available resources
First, you should ensure that you have sufficient resources to load the data from the source server into the destination server.
Ensure that the available memory on the source, ETL, and destination servers can handle the amount of data you are trying to load. Besides, make sure that your network connection bandwidth is not decreasing the data transfer performance.
Make sure that the following hardware issues are not occurring in any of the servers:
- Drive out of storage
- Server is out of memory
Make sure that your machine is not running out of memory. You can simply use the Task Manager to identify the amount of available memory.
2. The data source
Make sure that the table is not a heap
After checking the available resources, you should ensure that your data source is not a heap; at the very least, you should have a primary key created on that table.
Indexes
If your SQL command contains any filtering, ordering, or joins, you should create the indexes needed by those operations.
- Use WHERE, JOIN, ORDER BY, SELECT Column Order When Creating Indexes
- Is the WHERE-JOIN-ORDER-(SELECT) rule for index column order wrong?
Instead of using an OLE DB source to connect to Oracle, try using the Microsoft Connector for Oracle (previously known as the Attunity connectors). Microsoft previously mentioned that it should provide faster performance than traditional OLE DB providers.
Use the Oracle connection manager rather than the OLE DB connection manager.
To be honest, I am not sure if this component can make the data load faster, since I haven't tested it myself.
Removing the destination and adding a dummy task
The last thing to try is to remove the ODBC destination and add a dummy task; for example, use a Row Count component.
Run the package; if the data is loaded faster, then loading data from Oracle is not decreasing the performance.
3. SSIS configuration
Now, let us start troubleshooting the SSIS package.
Running in 64-bit mode
First, try to execute the package in 64-bit mode. You can change this from the package configuration. Make sure that the Run64bitRuntime property is set to True.
Using SSIS, data is loaded into memory while being transferred from source to destination. There are two properties of the Data Flow Task that specify how much data is transferred in the memory buffers used by the SSIS pipeline.
Based on the following Integration Services performance tuning white paper:
DefaultMaxBufferRows – DefaultMaxBufferRows is a configurable setting of the SSIS Data Flow task that is automatically set at 10,000 records. SSIS multiplies the Estimated Row Size by the DefaultMaxBufferRows to get a rough sense of your dataset size per 10,000 records. You should not configure this setting without understanding how it relates to DefaultMaxBufferSize.
DefaultMaxBufferSize – DefaultMaxBufferSize is another configurable setting of the SSIS Data Flow task. The DefaultMaxBufferSize is automatically set to 10 MB by default. As you configure this setting, keep in mind that its upper bound is constrained by an internal SSIS parameter called MaxBufferSize which is set to 100 MB and can not be changed.
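As a worked example based on the numbers quoted above: with an estimated row size of 500 bytes, the default 10,000 rows per buffer implies roughly 5 MB per buffer, which fits within the default 10 MB DefaultMaxBufferSize; raising the row count to 50,000 would imply roughly 25 MB per buffer, so DefaultMaxBufferSize would have to be raised as well, and it can never exceed the internal 100 MB MaxBufferSize cap.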
You should try changing those values and testing your package's performance after each change, until performance improves.
4. Destination
Indexes/Triggers/Constraints
You should make sure that the destination table does not have any constraints or triggers, since they significantly decrease data load performance; each batch inserted must be validated or preprocessed before being stored.
- Are SQL Server database triggers evil?
- The benefits, costs, and documentation of database constraints
Besides, the more indexes you have, the lower the data load performance.
ODBC Destination custom properties
The ODBC destination has several custom properties that can affect data load performance, such as BatchSize (rows per batch) and TransactionSize (only available in the Advanced Editor).
QUESTION
I believe I should be able to do select * into #temptable from othertable (where #temptable does not previously exist), but it does not work. Assuming that othertable exists and has valid data, and that #sometemp does not exist,
...ANSWER
Answered 2022-Feb-19 at 20:28
There are a few different approaches:
- Use the immediate argument in your DBI::dbExecute statement, as sketched below.
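A minimal sketch, assuming con is an existing odbc connection to SQL Server and that othertable exists (both taken from the question). A prepared statement is executed in a separate scope on SQL Server, so a temp table created there is dropped as soon as the prepare/execute cycle finishes; immediate = TRUE sends the statement directly instead:

    # Direct execution keeps #temptable available for later queries
    # on the same connection.
    DBI::dbExecute(con, "SELECT * INTO #temptable FROM othertable",
                   immediate = TRUE)

    DBI::dbGetQuery(con, "SELECT COUNT(*) AS n FROM #temptable")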
QUESTION
I'm just getting into BIML and have written some scripts to create a few DTSX packages. In general, most things are working, but one thing is driving me crazy.
I have an ODBC source (PostgreSQL). From there I'm getting data out of a table using an ODBC Source. The table has a text column (the name of the column is "description"). I cast this column to varchar(4000) in the query in the ODBC Source (I know that there will be truncation, but that's OK). If I do this manually in Visual Studio, the Advanced Editor of the ODBC Source shows "Unicode string [DT_WSTR]" with a length of 4000 for both the external and the output column. So there everything is fine. But if I do the same thing with BIML and generate the SSIS package, the external column will still say "Unicode string [DT_WSTR]" with a length of 4000, but the output column says "Unicode text stream [DT_NTEXT]". So the mapping done by BIML differs from the mapping done manually in SSIS. This causes two warnings:
- A warning that metadata has changed and should be synced
- A warning that the source uses LOB columns and is set to row-by-row fetch
Both warnings are bad, but the second one also causes drastically degraded performance! If I set the cast to varchar(255), the mapping is fine (external and output columns are then "Unicode string [DT_WSTR]" with a length of 255). But as soon as I go higher, like varchar(256), it is again treated as [DT_NTEXT] in the output.
Is there anything I can do about this? I invested days in evaluating BIML and find many things a quality-of-life improvement, but this issue is killing it. It defeats the purpose of BIML if I have to correct BIML's errors manually after every build.
Does anyone know how I can solve this issue? A correct automatic mapping between external and output columns would be great, but at least the option to define the mapping myself would be OK.
Any help is appreciated!
Greetings, Marco
Edit: As requested, a minimal example for better understanding:
- The column in the ODBC source (Postgres) has the type "text" (column name: description)
- I select it in an ODBC Source with this query (DirectInput):
SELECT description::varchar(4000) from mySourceTable
- The ODBC Source in BIML looks like this:
SELECT description::varchar(4000) from mySourceTable
- If I now generate the dtsx package, the ODBC Source throws the above-mentioned warnings with the above-mentioned data types for the external and output columns
...ANSWER
Answered 2022-Feb-18 at 07:48
As mentioned in the comment above, I got an answer from another direction: you have to use DataflowOverrides in the ODBC Source in BIML. For my example, you have to do something like this:
QUESTION
Using the profiler on SQL Server to monitor a stored procedure call via DBI/odbc shows that dynamic SQL / a prepared statement is generated:
...ANSWER
Answered 2022-Feb-16 at 22:26
I found what I was looking for in the odbc package documentation: direct execution.
The odbc package uses prepared statements to compile the query once and reuse it, allowing large or repeated queries to be more efficient. However, prepared statements can actually perform worse in some cases, such as many different small queries that are all only executed once. Because of this, the odbc package now also supports direct queries by specifying immediate = TRUE.
By default, a query will use a prepared statement.
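Below is a minimal sketch of both modes, assuming con is an existing DBI/odbc connection and mytable is a hypothetical table:

    # Prepared statement: the query is compiled once and the parameter
    # is bound at execution time (this is what the profiler showed).
    DBI::dbGetQuery(con, "SELECT * FROM mytable WHERE id = ?",
                    params = list(42L))

    # Direct execution: the statement is sent as-is, with no prepare step.
    DBI::dbGetQuery(con, "SELECT * FROM mytable WHERE id = 42",
                    immediate = TRUE)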
QUESTION
I am trying to make sense of the following error that I started getting when I set up my Python code to run on a VM server, which has Python 3.9.5 installed instead of the 3.8.5 on my desktop. Not sure that matters, but it could be part of the reason.
The error
...ANSWER
Answered 2022-Feb-11 at 16:30
Is pyodbc becoming deprecated?
No. For at least the last couple of years, pandas' documentation has clearly stated that it wants either a SQLAlchemy Connectable (i.e., an Engine or Connection object) or a SQLite DBAPI connection. (The switch-over to SQLAlchemy was almost universal, but they continued supporting SQLite connections for backwards compatibility.) People have been passing other DBAPI connections (like pyodbc Connection objects) for read operations, and pandas hasn't complained … until now.
Is there a better way to achieve similar results without the warning?
Yes. You can take your existing ODBC connection string and use it to create a SQLAlchemy Engine object as described here:
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.