FlatFiles | reads and writes CSV, fixed-length, and other flat file formats | CSV Processing library
kandi X-RAY | FlatFiles Summary
Plain-text formats primarily come in two variations: delimited (CSV, TSV, etc.) and fixed-width. FlatFiles supports working with both formats. Unlike most other libraries, FlatFiles puts a focus on schema definition: you build a schema and pass it to a reader or writer, which uses it to extract or write out your values. A schema is defined by specifying which data columns are in your file. A column has a name, a type, and an ordinal position in the file; the position matches the order in which you add the columns to the schema, so you are left specifying just the name and the type.
Beyond that, you have a lot of control over the parsing and formatting behavior when reading and writing, respectively. Most of the time the out-of-the-box options will just work, but when you need that extra level of control, you don't have to bend over backward to work around the API, as with many other libraries. FlatFiles was designed to make handling oddball edge cases easier.
If you are working with data classes, defining schemas is even easier: you can use the type mappers to map your properties directly, which saves you from specifying column names or types, since both can be derived from the property. For those working with ADO.NET, there is even support for DataTable and IDataReader. If you really want to, you can read and write values using raw object[] arrays. Both the schema and type-mapper approaches are sketched in the examples section below.
FlatFiles Examples and Code Snippets
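The snippet below is a minimal sketch of the two approaches described above: an explicit schema and a type mapper. It assumes the SeparatedValueSchema / SeparatedValueReader / SeparatedValueTypeMapper names documented in the FlatFiles README and a hypothetical people.csv input file; newer versions of the library may name these differently.

using System;
using System.IO;
using System.Linq;
using FlatFiles;
using FlatFiles.TypeMapping;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Created { get; set; }
}

public class Demo
{
    public static void Main()
    {
        // 1) Explicit schema: each column gets a name and a type; its
        //    position comes from the order of the AddColumn calls.
        var schema = new SeparatedValueSchema();
        schema.AddColumn(new Int32Column("id"))
              .AddColumn(new StringColumn("name"))
              .AddColumn(new DateTimeColumn("created") { InputFormat = "yyyyMMdd" });

        using (var reader = new StreamReader("people.csv"))
        {
            var csvReader = new SeparatedValueReader(reader, schema);
            while (csvReader.Read())
            {
                object[] values = csvReader.GetValues();  // raw object[] access
                Console.WriteLine(string.Join(" | ", values));
            }
        }

        // 2) Type mapper: column types are derived from the properties,
        //    so only names and formats need to be spelled out.
        var mapper = SeparatedValueTypeMapper.Define<Person>();
        mapper.Property(p => p.Id).ColumnName("id");
        mapper.Property(p => p.Name).ColumnName("name");
        mapper.Property(p => p.Created).ColumnName("created").InputFormat("yyyyMMdd");

        using (var reader = new StreamReader("people.csv"))
        {
            foreach (Person person in mapper.Read(reader).ToList())
            {
                Console.WriteLine($"{person.Id}: {person.Name}");
            }
        }
    }
}

Either way, the schema rather than the file is the source of truth, which is what makes the oddball edge cases tractable.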
Community Discussions
Trending Discussions on FlatFiles
QUESTION
I came across this service from Stack Overflow.
I believe the source is from a database. How do I build XML that outputs data in a similar format?
I use the lines below:
xmldoc.Load(xmlFileName);
string json = Newtonsoft.Json.JsonConvert.SerializeXmlNode(xmldoc);
Any recommendation on how to build the XML, which is the reverse process? My solutions are heavily dependent on XML and flat files.
...ANSWER
Answered 2022-Feb-08 at 11:22
According to https://api.stackexchange.com/docs, the Stack Exchange API only supports JSON output, not XML, so you will have to convert the JSON to XML.
My own preference is to do this conversion "by hand" using XSLT 3.0 rather than a standard library, because standard libraries often give you XML that is rather difficult to work with.
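The answer recommends hand-written XSLT 3.0; for completeness, here is a minimal sketch of the reverse conversion using the same Newtonsoft library the question already uses. The response.json input file and the "items" root element name are assumptions (the Stack Exchange API wraps its results in an items array).

using System.Xml;
using Newtonsoft.Json;

public class JsonToXml
{
    public static void Main()
    {
        string json = System.IO.File.ReadAllText("response.json");

        // DeserializeXmlNode is the inverse of the SerializeXmlNode call in
        // the question; the second argument supplies the root element name
        // that JSON documents lack.
        XmlDocument xmldoc = JsonConvert.DeserializeXmlNode(json, "items");
        xmldoc.Save("response.xml");
    }
}

As the answer notes, the element names this produces are dictated by the JSON keys, which is exactly why hand-rolled XSLT can give you friendlier XML.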
QUESTION
I have flat-file resources that are extracted into facts and dimensions. Some dimensions also come from DB resources. The transformation process runs on an as-needed basis (when there is new/updated data from the flat files). The problem is this: some data references don't exist in, or don't match, the dimension based on the DB resources, so the foreign key ID value on the fact is set to a default (zero if there is no matching data).
How can I perform an update on the facts once the said dimension (DB resource) has been updated? What is the best practice/routine for this kind of scenario?
Here is a sample illustration:
...ANSWER
Answered 2021-May-22 at 06:51
Based on your example, this is the way it should work:
Note: I would expect prodcode to be in the flat file, not the product name. Is this really how your data looks? Anyway, I will proceed.
First set of data arrives. Watermelon is in the fact table but not the dimension.
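The answer's routine, in outline: load the fact with the default key, then re-point it once the dimension row arrives. Below is a minimal C# sketch of that re-pointing step (not code from the answer), assuming SQL Server and hypothetical FactSales / DimProduct / ProdCode names, with the business key kept on the fact row.

using System;
using Microsoft.Data.SqlClient;

public class RepointFacts
{
    public static void Main()
    {
        // Re-key any fact rows still carrying the default (0) surrogate key
        // now that the matching dimension rows have arrived.
        const string sql = @"
            UPDATE f
            SET    f.ProductKey = d.ProductKey
            FROM   dbo.FactSales  AS f
            JOIN   dbo.DimProduct AS d ON d.ProdCode = f.ProdCode
            WHERE  f.ProductKey = 0;";

        using var conn = new SqlConnection("<connection string>");
        conn.Open();
        using var cmd = new SqlCommand(sql, conn);
        Console.WriteLine($"{cmd.ExecuteNonQuery()} fact rows re-keyed.");
    }
}

Running this after each dimension refresh keeps the facts converging on correct keys without reprocessing the flat files.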
QUESTION
I have a CSV file with the two columns:
...ANSWER
Answered 2020-Jun-03 at 21:15
You have defined your EmpID field as Int64, which works great when digits are there, but when there is no data (with a row still being present), SSIS will try to convert the empty string to a number and fail.
If you add an error pathway from the Flat File Source for truncation/error/etc., you'd see rows 5+ going down that path. For this data, I'd define everything as a string, since you need to get the data into the pipeline first, and then take action on it based on whatever business rules make sense (no name/ID, trash it).
As @alex points out in the comment, the final rows indicate three columns of data whereas you've defined two, so when the flat file source gets to them, it will blow up. SSIS can't handle inconsistent file formats like that.
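Outside SSIS, the same read-everything-as-string-then-validate idea looks like this in a minimal FlatFiles sketch. The class names follow the FlatFiles README, and the employees.csv layout is hypothetical:

using System;
using System.IO;
using FlatFiles;

public class ValidateCsv
{
    public static void Main()
    {
        // Declare every column as a string so blank or malformed values
        // reach the pipeline instead of failing during type conversion.
        var schema = new SeparatedValueSchema();
        schema.AddColumn(new StringColumn("EmpID"))
              .AddColumn(new StringColumn("Name"));

        using var reader = new StreamReader("employees.csv");
        var csvReader = new SeparatedValueReader(reader, schema);
        while (csvReader.Read())
        {
            object[] values = csvReader.GetValues();

            // Business rule applied after the load: no usable ID, trash the row.
            if (!long.TryParse((string)values[0], out long empId))
            {
                Console.WriteLine($"Rejected row: {values[1]}");
                continue;
            }
            Console.WriteLine($"{empId}: {values[1]}");
        }
    }
}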
QUESTION
Oracle Database 12c Enterprise Edition, release 12.1.0.2.0.
Current process.
Database 1: I have two cursor SQL queries (each joining a number of tables) that write to flat files (both files have a similar format) using a PL/SQL FOR loop. A number of flat files are created and written to a destination directory.
Database 2 picks up the flat files from the destination directory and processes each flat file into its system.
Writing a number of large files to a directory from one database, then processing them into a second database, is certainly time consuming, and the company is looking at ways to improve this performance. The process happens once a month, and between 200 and 1,500 files are created. Each file can be 100 KB to 5 GB in size.
New process.
I have been asked to look into creating a new solution to make this process quicker.
As a developer, the questions I face with any solution are: a) Is it quicker? b) Could it be done in a PL/SQL script? c) What problems could I face if I tried it? d) Is there a better solution? e) Are there any performance/system issues with the approach?
1. Transportable tablespaces - could a staging table be created in database 1, into which I bulk-collect all the data from both SQL queries? I would then transport the tablespace containing the staging table to database 2, where it would be processed into database 2. The tablespace would be dropped from database 2 after a week, and I would clear out the staging table in database 1 after a week too.
2. Data Pump - I'm pretty unsure about Data Pump: you export a DMP file (maybe using a query to select the data needed) to a directory, then pick up that DMP file and import it into the new database. I'm assuming it would create a staging table in the new system ready to be processed into the main tables. This could be a large dump file; would that be a problem?
3. GoldenGate - I'm not sure about GoldenGate; isn't it just a replication tool? Not sure where to go with this tool.
4. A view - create a view on database 1 (could this be a materialized view?) holding both SQL queries (UNION ALL); the second database would query this view over a database link to process the data into the second database. Would there be any problems with reading this data over a network?
Any ideas would be great. Has anyone had experience with the above? Is there a better solution I need to look at?
Thanks, Shaun
...ANSWER
Answered 2020-Jan-29 at 19:53
I would definitely go for option #4 - getting all the data via a DB link. I can almost guarantee it will be the fastest. Create a view in the source DB (it could be an MVIEW if you need to run the query many times), and then do either DROP TABLE and CREATE TABLE AS SELECT, or TRUNCATE TABLE and INSERT INTO ... SELECT, depending on your needs. Both CTAS and IAS can utilise parallel capabilities.
A Data Pump import (option #2) could be an option if option #4 is for some reason not doable. In that case, you should look into doing the Data Pump import via a database link; it makes the process much simpler.
If transferring the data between the two databases becomes a bottleneck, you could look into using compression (check your licenses in that case).
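To make option #4 concrete, here is a minimal sketch of database 2 pulling everything across a database link with a single parallel CTAS, driven from C# via ODP.NET. The view, link, table, and connection names are hypothetical, and this is one possible shape rather than the answerer's exact setup:

using Oracle.ManagedDataAccess.Client;

public class PullViaDbLink
{
    public static void Main()
    {
        // One round trip replaces the flat-file hop: create the staging
        // table on database 2 directly from the view on database 1.
        const string ctas = @"
            CREATE TABLE stage_monthly PARALLEL NOLOGGING AS
            SELECT /*+ PARALLEL */ *
            FROM   monthly_extract_v@db1_link";

        using var conn = new OracleConnection("<database 2 connection string>");
        conn.Open();
        using var cmd = new OracleCommand(ctas, conn);
        cmd.ExecuteNonQuery();
    }
}

On subsequent months, TRUNCATE TABLE plus INSERT /*+ APPEND */ INTO ... SELECT against the same link avoids re-creating the table, per the answer.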
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported