unix | 1st Edition UNIX kernel sources from pdf document
kandi X-RAY | unix Summary
Run ./simh.cfg which starts the pdp11 simulator. You should see this:
PDP-11 simulator V3.8-0
./simh2.cfg> #!tools/pdp11
Unknown command
Disabling CR
Disabling XQ
RF: buffering file in memory
TC0: 16b format, buffering file in memory
Listening on port 5555 (socket 7)
unix Key Features
unix Examples and Code Snippets
/* Pattern */ 'http://unix:SOCKET:PATH'
/* Example */ request.get('http://unix:/absolute/path/to/unix.socket:/request/path')
const getTimestamp = (date = new Date()) => Math.floor(date.getTime() / 1000);
getTimestamp(); // 1602162242
const fromTimestamp = timestamp => new Date(timestamp * 1000);
fromTimestamp(1602162242); // 2020-10-08T13:04:02.000Z
UnixDomainSocketAddress getAddress(Path socketPath) {
return UnixDomainSocketAddress.of(socketPath);
}
public static void main(String[] args) throws IOException, InterruptedException {
new UnixDomainSocketServer().runServer();
}
Community Discussions
Trending Discussions on unix
QUESTION
I would like to know of a fast/efficient way in any program (awk/perl/python) to split a csv file (say 10k columns) into multiple small files each containing 2 columns. I would be doing this on a unix machine.
...ANSWER
Answered 2021-Dec-12 at 05:22: With your shown samples and attempts, please try the following awk code. Since you are opening all the output files at once, it may fail with the infamous "too many open files" error. To avoid that, collect all values into an array, print them one by one in the END block of the awk code, and close each output file as soon as its contents have been printed.
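Purely as an illustration of the approach described (not the answerer's awk script), here is a sketch in Python; the input file name input.csv and the comma delimiter are assumptions:
import csv

# Split input.csv into files of 2 columns each: out_1.csv gets columns 1-2,
# out_2.csv gets columns 3-4, and so on. The whole file is buffered in memory,
# mirroring the array-then-print approach described above.
with open("input.csv", newline="") as src:
    rows = list(csv.reader(src))

ncols = len(rows[0])
for i in range(0, ncols, 2):
    with open(f"out_{i // 2 + 1}.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in rows:
            writer.writerow(row[i:i + 2])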
QUESTION
We have a PHP web application that sends SMTP emails via authenticated smtp.office365.com. This has been working for at least a couple of years.
We are using PHP Mailer 5.2. We are forcing the crypto_method to STREAM_CRYPTO_METHOD_TLSv1_2_CLIENT.
Here's the weird thing. About 75% of the time it works fine. The rest of the time it reports SMTP ERROR: Password command failed: 421 4.7.66 TLS 1.0 and 1.1 are not supported. Please upgrade/update your client to support TLS 1.2.
Registered Stream Socket Transports are tcp, udp, unix, udg, ssl, sslv3, tls, tlsv1.0, tlsv1.1, tlsv1.2.
How is it even possible that it works most of the time? If it were truly a TLS issue, I'd expect it to fail 100% of the time.
...ANSWER
Answered 2021-Dec-15 at 14:22: From Microsoft:
New submission error speedbump to be introduced
We are fully aware that many customers will not have noticed the multiple Message Center posts and blog posts, and are not aware of clients or devices that are still using TLS1.0 to submit messages. With this in mind, starting in September 2021, we will reject a small percentage of connections that use TLS1.0 for SMTP AUTH. Clients should retry as with any other temporary errors that can occur during submission. Over time we will increase the percentage of rejected connections, causing delays in sending that more and more customers should notice. The error will be:
421 4.7.66 TLS 1.0 and 1.1 are not supported. Please upgrade/update your client to support TLS 1.2. Visit https://aka.ms/smtp_auth_tls.
We intend to make a final announcement when we are ready to make the change to disable TLS1.0 and TLS1.1 for SMTP AUTH for the regular endpoint.
Additional documentation can be found here: Opt-in to Exchange Online endpoint for legacy TLS clients using SMTP AUTH
Exchange Transport Team
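The question already forces STREAM_CRYPTO_METHOD_TLSv1_2_CLIENT on the PHPMailer side; as a general, hedged illustration of pinning an SMTP submission client to TLS 1.2 or newer (shown in Python rather than PHP; the addresses and credentials below are placeholders):
import smtplib
import ssl
from email.message import EmailMessage

# Refuse to negotiate anything older than TLS 1.2 with the submission endpoint.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

msg = EmailMessage()
msg["From"] = "sender@example.com"        # placeholder addresses
msg["To"] = "recipient@example.com"
msg["Subject"] = "TLS 1.2 test"
msg.set_content("Sent over a connection pinned to TLS 1.2+.")

with smtplib.SMTP("smtp.office365.com", 587) as smtp:
    smtp.starttls(context=context)        # upgrade the session using the pinned context
    smtp.login("sender@example.com", "app-password")   # placeholder credentials
    smtp.send_message(msg)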
QUESTION
When I try to open psql with this command:
ANSWER
Answered 2021-Oct-22 at 14:36: Peer authentication means that the connection is only allowed if the name of the database user is the same as the name of the operating system user.
So if you run psql -U postgres as operating system user root or jimmy, it won't work.
You can specify a mapping between operating system users and database users in pg_ident.conf.
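As a hedged sketch of what such a mapping can look like (the map name and user names are illustrative, not from the original question), pg_ident.conf would get a line like:
# MAPNAME           SYSTEM-USERNAME   PG-USERNAME
root_as_postgres    root              postgres
and the matching peer line in pg_hba.conf would reference it with map=root_as_postgres, so that the OS user root may connect as the database user postgres.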
QUESTION
Although High Sierra is no longer supported by Homebrew, I need to install the llvm@13 formula as a dependency for other formulae. So I tried to install it this way:
ANSWER
Answered 2021-Nov-26 at 08:27: Install llvm with debug mode enabled:
QUESTION
I was able to build a multiarch image successfully from an M1 MacBook, which is arm64. Here's my Dockerfile. When I try to run the image on a Raspberry Pi (aarch64/arm64), I get this error: standard_init_linux.go:228: exec user process caused: exec format error
Editing the post with the python file as well:
...ANSWER
Answered 2021-Oct-27 at 16:58: A "multiarch" Python interpreter built on MacOS is intended to target MacOS-on-Intel and MacOS-on-Apple's-arm64.
There is absolutely no binary compatibility with Linux-on-Apple's-arm64, or with Linux-on-aarch64. You can't run MacOS executables on Linux, no matter if the architecture matches or not.
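One common route (an addition here, not from the answerer) is to cross-build Linux images for the target architectures instead, for example with Docker buildx; the image name below is a placeholder:
docker buildx build --platform linux/arm64,linux/amd64 -t example/myimage:latest --push .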
QUESTION
I am working with WSL a lot lately because I need some native UNIX tools (and emulators aren't good enough). I noticed that the speed difference when working with NPM/Yarn is incredible.
I conducted a simple test that confirmed my feelings. The test was running npx create-react-app my-test-app, and the WSL result was "Done in 287.56s." while Git Bash finished with "Done in 10.46s.".
This is not the whole picture, because the perceived time was higher in both cases, but even based on that - there is a big issue somewhere. I just don't know where. The project I'm working on uses tens of libraries and changing even one of them takes minutes instead of seconds.
Is this something that I can fix? If so - where to look for clues?
Additional info:
my processor: Processor AMD Ryzen 7 5800H with Radeon Graphics, 3201 Mhz, 8 Core(s), 16 Logical Processors
I'm running Windows 11 with all the latest updates to both the system and the WSL. The chosen system is Ubuntu 20.04
I've seen some questions that are somewhat similar like 'npm install' extremely slow on Windows, but they don't touch WSL at all (and my pure Windows NPM works fast).
the issue is not limited to NPM, it's also for Yarn
another problem that I'm getting is that file watching is not happening (I need to restart the server with every change). In some applications I don't get any errors, sometimes I get the following:
...
ANSWER
Answered 2021-Aug-29 at 15:40: Since you mention executing the same files (with proper performance) from within Git Bash, I'm going to make an assumption here. Correct me if I'm wrong on this, and I'll delete the answer and look for another possibility.
This would be explained (and expected) if your files are stored on /mnt/c (a.k.a. C:, or /C under Git Bash) or any other Windows drive, as they would likely need to be in order to be accessed by Git Bash.
WSL2 uses the 9P protocol to access Windows drives, and it is currently known to be very slow when compared to:
- Native NTFS (obviously)
- The ext4 filesystem on the virtual disk used by WSL2
- And even the performance of WSL1 with Windows drives
I've seen a git clone of a large repo (the WSL2 Linux kernel Github) take 8 minutes on WSL2 on a Windows drive, but only seconds on the root filesystem.
Two possibilities:
If possible (and it is for most Node projects), convert your WSL to version 1 with wsl --set-version 1. I always recommend making a backup with wsl --export first. And since you are making a backup anyway, you may as well just create a copy of the instance by wsl --import-ing your backup with --version 1 as the last argument. WSL1 and WSL2 both have their uses, and you may find it helpful to keep both around. See this answer for more details on the exact syntax (a sketch of the export/import commands is below).
Or just move the project over to somewhere under the WSL root, such as /home/username/src/.
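A rough sketch of that export/import path (the distribution name Ubuntu-20.04 and the file paths are placeholders, not from the original answer):
wsl --export Ubuntu-20.04 C:\backups\ubuntu2004.tar
wsl --import Ubuntu2004-v1 C:\wsl\ubuntu2004-v1 C:\backups\ubuntu2004.tar --version 1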
QUESTION
In Excel VBA with the SQLite ODBC Driver, my simple SELECT query run against a single table retrieves 'long' integers (10 or more decimal places) incorrectly. How can these values be retrieved correctly, without truncation or whatever garbling is going on?
(PLEASE NOTE: the database structure/field definitions can't be modified — the database belongs to an open source application, Anki, and changing the structure would break the software.)
The particular table I'm querying contains (at least) several fields that can contain longer integer values (10 or more decimal places). The primary key ("id") contains Unix timestamp (datetime) values, with milliseconds, so the integer in the primary key field always occupies 13 decimal places.
Here is the table definition:
...ANSWER
Answered 2022-Jan-02 at 17:15: As commented, essentially the issue derives from how the BigInt parameter is included in the connection string. While the MSDN documentation appears to differ from the implementation, key/value pairs should avoid whitespace:
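A hedged illustration of that kind of connection string (the driver name and database path below are placeholders, and the exact keys accepted depend on the SQLite ODBC driver installed):
Driver=SQLite3 ODBC Driver;Database=C:\path\to\collection.anki2;BigInt=True;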
QUESTION
I am stuck on this problem. I am running Cypress tests. When I run them locally, they run smoothly; when I run them in CircleCI, an error is thrown after some execution.
Here is what I am getting:
ANSWER
Answered 2021-Oct-21 at 08:53: The issue was resolved by reverting the Cypress version back to 7.6.0.
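If you pin the version the same way, one minimal route (assuming npm is the package manager, which the excerpt does not state) is:
npm install --save-dev cypress@7.6.0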
QUESTION
I'm using a small laptop to copy video files on location to multiple memory sticks (~8GB). The copy has to be done without supervision once it's started and has to be fast.
I've identified a serious boundary to the speed, that when making several copies (eg 4 sticks, from 2 cameras, ie 8 transfers * 8Gb ) the multiple Reads use a lot of bandwidth, especially since the cameras are USB2.0 interface (two ports) and have limited capacity.
If I had unix I could use tar -cf - | tee tar -xf /stick1 | tee tar -xf /stick2 etc which means I'd only have to pull 1 copy (2*8Gb) from each camera once, on the USB2.0 interface.
The memory sticks are generally on a hub on the single USB3.0 interface, which is driven on a different channel, so they write sufficiently fast.
For reasons, I'm stuck using the current Win10 PowerShell.
I'm currently writing the whole command to a string (concatenating the various sources and the various targets) and then using Invoke-Process to execute the copy process while I'm entertaining and buying the rounds in the pub after the shoot. (hence the necessity to be afk).
I can tar cf - | tar xf a single file, but can't seem to get the tee functioning correctly.
I can also successfully use the microSD slot to do a single camera's card, which is not as physically nice but is fast for one camera's recording; however, I still have the bandwidth issue on the remaining camera(s). We may end up with 4-5 source cameras at the same time, which means the read-once, write-many problem is still going to be an issue. Edit: I've just advanced to playing with Get-Content -raw | tee \stick1\f1 | tee \stick2\f1 | out-null. Haven't done timings or file verification yet.
Edit2: It seems like the Get-Content -raw works properly, but the functionality of PowerShell pipelines violates two of the fundamental Commandments of programming: A program shall do one thing and do it well, Thou shalt not mess with the data stream. For some unknown reason PowerShell default (and only) pipeline behaviour always modifies the datastream it is supposed to transfer from one stream to the next. Doesn't seem to have a -raw option nor does it seem to have a $session or $global I can set to remedy the mutilation.
How do PowerShell people transfer raw binary from one stream out, into the next process?
...ANSWER
Answered 2021-Dec-09 at 23:43: Maybe not quite what you want (if you insist on using built-in PowerShell commands), but if you care about speed, use streams and asynchronous Read/Write. PowerShell is a great tool because it can use any .NET class seamlessly.
The script below can easily be extended to write to more than 2 destinations and can potentially handle arbitrary streams. You might want to add some error handling via try/catch there too. You may also try to play with buffered streams with various buffer sizes to optimize the code.
Some references:
-- 2021-12-09 update: Code is modified a little to reflect suggestions from comments.
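Purely as a sketch of the read-once, write-many idea described above (in Python rather than the answerer's PowerShell, with placeholder paths and buffer size):
# Read the camera file once, in large chunks, and write each chunk to every stick.
SRC = "cam1.mp4"                                          # placeholder source file
DESTS = ["/mnt/stick1/cam1.mp4", "/mnt/stick2/cam1.mp4"]  # placeholder stick paths
CHUNK = 8 * 1024 * 1024                                   # 8 MiB read buffer

outs = [open(d, "wb") for d in DESTS]
try:
    with open(SRC, "rb") as src:
        while chunk := src.read(CHUNK):
            for out in outs:
                out.write(chunk)    # one read from the camera, one write per stick
finally:
    for out in outs:
        out.close()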
QUESTION
So I was trying to convert my data's timestamps from Unix timestamps to a more readable date format. I created a simple Java program to do so and write to a .csv file, and that went smoothly. I tried using it for my model by one-hot encoding it into numbers and then turning everything into normalized data. However, after my attempt to one-hot encode (which I am not sure if it even worked), my normalization process using make_column_transformer failed.
...ANSWER
Answered 2021-Dec-09 at 20:59: Using OneHotEncoder is not the way to go here; it's better to extract features from the time column as separate features like year, month, day, hour, minutes, etc., and give these columns as input to your model.
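A minimal sketch of that extraction with pandas (the column name time and the assumption that the stamps are in seconds are placeholders for whatever the actual data uses):
import pandas as pd

# Placeholder frame; in practice df is your dataset with a Unix-timestamp column.
df = pd.DataFrame({"time": [1602162242, 1639059599]})

# Convert seconds-since-epoch to datetimes (use unit="ms" if the stamps are in milliseconds).
dt = pd.to_datetime(df["time"], unit="s")

# Expand the single timestamp into model-friendly numeric features.
df["year"] = dt.dt.year
df["month"] = dt.dt.month
df["day"] = dt.dt.day
df["hour"] = dt.dt.hour
df["minute"] = dt.dt.minute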
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install unix
Download the source code for the Simh simulator from here: http://simh.trailing-edge.com/. Make sure that you download version 3.8-0 or later; earlier versions need patches to work.
Unpack Simh somewhere. Make the BIN/ directory in Simh at the top level. Do make pdp11 to make the pdp11 simulator in the BIN/ directory. Copy the BIN/pdp11 executable into the tools/ directory.
Return to the 1st Edition top-level directory. Do a make. This will do several things. It will build tools/mkfs, tools/ml and tools/apout/apout. These tools are required to build the filesystems for 1e UNIX, and the kernel. It will create kernel sources with some necessary patches, assemble the kernel and build a bootable Simh memory image which is installed into the images directory. Finally, the make will build the rf0.dsk, rk0.dsk and tape images and install these in the images directory. You can also do a "make clean" to clean out the images/ and build/ directories. A "make clobber" will clean out the images/, build/ and tools/ directories.
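A condensed sketch of those steps as shell commands (the directory names simh-3.8-1 and unix-1e are placeholders for wherever Simh and this repository were unpacked):
cd simh-3.8-1                      # the unpacked Simh source tree
mkdir -p BIN
make pdp11                         # builds BIN/pdp11
cp BIN/pdp11 ../unix-1e/tools/     # copy the simulator into this repo's tools/ directory
cd ../unix-1e
make                               # builds the tools, assembles the kernel, creates images/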