:( I’m sorry to hear that. Well, for Android there’s MaterialFiles, which is fully FLOSS and supports FTP, SFTP, and SMB. Not sure about iOS, but I imagine there are options there.
I hope that your journey through life becomes a little less rocky.
There are definitely a lot of good options out there. What are you using right now for regular old FTP? The odds are actually pretty good that it already supports SFTP. A lot of file management applications do both and lump them together, even though they’re completely different protocols (sftp is from the late nineties).
If it doesn’t, then I don’t know what OS you’re using, so I’ll just recommend options for the big 3. For Windows, there’s WinSCP. For MacOS there’s Cyberduck. Most file managers on Linux distros let you just type sftp://me@wherever in the navigation bar, meaning you get a totally seamless experience with the rest of your FS.
EDIT: or, you can use sshfs-win on Windows and have your remote filesystem show up as a regular ol’ drive, just like SMB. MacOS and Linux have sshfs, and I know there are GUIs wrapping sshfs on those platforms. I personally use sshfs at home and it’s great (although no GUI wrapper, I’m a weirdo who doesn’t use a graphical file manager at all).
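If you want to try it, it’s dead simple (host name and paths made up, obviously):

mkdir -p ~/fileserver
sshfs me@fileserver.example.com:/srv/files ~/fileserver
# poke around like it's any other directory, then unmount:
fusermount -u ~/fileserver   # plain umount on MacOS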
You should be able to just use ssh/sftp. There are lots of great clients, and you can absolutely still use usernames and passwords, no public/private key stuff required. You can even use ssh and scp right from PowerShell on Windows boxen if you’re so inclined. There’s WinSCP, and if you want filesystem mounting, there’s this: https://github.com/winfsp/sshfs-win
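For example, straight out of a stock PowerShell prompt (host name made up):

ssh me@fileserver.example.com
scp C:\Users\me\cats.jpg me@fileserver.example.com:/home/me/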
For macos and Linux, the options are far more plentiful.
Edit: there’s also FilePizza, which is a file transfer thingy with no middleman that’s open source, although it’s not copyleft AFAICT: https://github.com/kern/filepizza. It and similar tools aren’t really what you’re after, I just think they’re neat.
PART 4.
You expect a file transfer program to reliably and faithfully transfer your files, byte-for-byte, from one system to another. FTP spits in your face and shits on your chest. You know how Linux uses LF (i.e. \n) for newlines and Windows uses CRLF (i.e. \r\n)? Pretty annoying, right? Well, FTP’s ASCII mode will automatically rip off those \r characters for you! Sounds pretty sweet, right? Fuck no it’s not. All of a sudden, your file checksums have changed. If you pass the same file back to a Windows user with a different and more sane file transfer system, then they get a broken file because FTP didn’t mind its own fucking business. If you have a CRLF file and need an LF file, just explicitly use dos2unix. Wanna go the other way? unix2dos. The tool has been around since 1989 and it’s great.
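Usage is about as simple as it gets (file name made up):

dos2unix notes.txt   # CRLF -> LF, in place
unix2dos notes.txt   # LF -> CRLF, in place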
Now, what if you’re not transferring text, but instead are transferring a picture of a cute cat? What if your binary data happens to have 0x0D 0x0A somewhere in it? Well, ASCII mode will happily translate that to 0x0A and fucking ruin your adorable cat picture that you were going to share with your depressed significant other in an attempt to cheer them up. Now the ruined JPEG will remind them of the futility of their situation and they’ll slide even deeper into cold emptiness. Thanks, FTP.
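Don’t believe me? You can fake the damage yourself on any Unix box; tr -d '\r' is a close approximation of what ASCII mode does on the receiving end:

head -c 1M /dev/urandom > cat.jpg        # stand-in for your adorable cat
sha256sum cat.jpg
tr -d '\r' < cat.jpg > cat_mangled.jpg   # every 0x0D byte silently vanishes
sha256sum cat_mangled.jpg                # different hash, ruined cat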
You can tell your client to use binary mode and this problem goes away! In fact, modern clients do this automatically so your SO gets to see the adorable fuzzy cat picture. But let’s just stop and think about this. Why use a protocol that is dangerous by default? Why use a protocol that supports no form of security (unless you’re using fucking godawful FTPS or FTP over SSH)? Why use a protocol that is so broken by design that small business hardware has been designed to try to unfuck it? Is it faster? I mean, not really. SFTP has encryption/decryption overhead, but your CPU is so fast that you’d need to transfer at 25+ Gb/s to notice it. Is it easier? Fuck no it’s not easier, look at all of the stupid footguns I’ve just mentioned. Is it simpler? The line protocol is simple, but so is HTTP, and HTTP has a much simpler control flow path (merging the data and control planes is objectively the right thing to do in this context). And shit, you want a simple protocol for cases where you don’t have a lot of CPU power? Use fucking TFTP. It’s dogshit, but it was intentionally designed to be dogshit so that a fucking potato could receive data with it.
There is no task that is currently being done with FTP that couldn’t be done more easily, more securely, and more quickly with some other protocol (like fucking SSH and SFTP, which is now built into fucking Windows for god’s sake). Fuck FTP.
PART 3.
They made their STUPID MODEMS FUCK WITH THE FTP PACKETS. I have personally experienced this with Comcast Business. The stupid piece of shit DOCSIS modem they provide intercepts the FTP packet from your server saying “oh, connect to this address: x.x.x.x:44010” and they rewrite the fucking address to the public IP. There is no way to turn just this horse piss off. Now, for average business customers, this probably saved Comcast a bunch of money in support calls. However, if you’re using the so-called bridge mode on that degenerate piece of shit-wrapped-silicon (where rather than allowing the modem to give you a DHCP address, you just configure your system to have one of the addresses in the /29 space and the modem detects that and says oh okay don’t NAT traffic when it’s going to this address, just rewrite the MAC and shunt it over the right interface), then something funny happens. The modem still rewrites the contents of the packet, but it uses the wrong fucking IP address! Because the public IP that your server is running on is no longer available to the modem, the modem just chooses another fucking address. Then, the client tries to connect to 1.2.3.5 instead of 1.2.3.4 where your server is listening, the modem says “hey I’m 1.2.3.5 and you can fuck off, I’m dropping your SYN for port 44010”, and I get an angry call from the client asking why they can’t download their files using this worthless protocol. I remember having a conversation like this:
Me: “Just use SFTP on port 22!”
Client: “No! FTP is faster/more secure/good enough for my grandfather, good enough for me/corporate won’t allow port 22.”
Me: “Comcast is fucking me right now. What if we lied and served SFTP over port 21?”
# we try it
Client: “It’s not working! I can’t even connect!”
I couldn’t connect either. I couldn’t connect to anything. Trying to do SFTP over port 21 caused the stupid fucking modem to CRASH.
Are you starting to see what the problem is? It’s like Microsoft preserving bugs in Windows APIs so that shitty software doesn’t break, and then they end up doing crazy gymnastics to accommodate old shit like the Windows 8 -> Windows 10 thing where they couldn’t use “Windows 9” because that would confuse software into thinking it was running “Windows 95” or “Windows 98”. FTP has some bugfuck crazy design decisions that we’ve collectively decided to just “work around,” and it leads to fucking gymnastics.
Speaking of bugfuck crazy design decisions, FTP’s default file transfer mode intentionally mangles data!
Continued in part 4.
PART 2.
NAT, much like the city of Phoenix, is a monument to man’s arrogance. Fuck NAT and fuck FTP. If your FTP server is listening directly on a public IP address hooked up directly to a proper router, then none of this applies. If you’re anything like me, the last company I worked for (a small startup), or my current company (many many thousands of employees making software you know and may or may not hate, making many billions of dollars a year), then the majority of your servers are living in RFC1918 space. Traffic from the internet is making it to them via NAT (or NAT with extra steps, i.e. L4 load balancers).
A request comes in for $PUBLIC_IP TCP port 21 and is forwarded to your failure of a boxen at 10.0.54.187. Your FTP server is a big stupid idiot and doesn’t know this. It thinks that it’s king shit and has its own public IP address. Therefore, when it’s deciding what ADDR:PORT it’s going to tell the stupid FTP client to connect to, it just looks at one of the adapters on the box and says “oh, I’ll tell this client on the internet to connect to 10.0.54.187:44007” and then I fucking cry. The FTP client is an idiot, but the IP stack on the client’s home/business router is not and says “oh, that’s an address living in RFC1918 space, I shouldn’t send that out over the internet” and they don’t get the results of their LIST.
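On the wire, the moment of failure looks roughly like this (the last two numbers of the 227 reply encode the port: 171*256 + 231 = 44007):

---> PASV
<--- 227 Entering Passive Mode (10,0,54,187,171,231)
# the client dutifully computes 10.0.54.187:44007 and sends its SYN into the void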
So, how do you fix this? Well, you fix it by not using FTP. Use SFTP USE SFTP USE SFTP FOR GOD’S SAKE. But since this world is a shit fucking place, you have two options. The best option is to configure your FTP server to lie about its IP address. Rather than being honest about what a fool it is, you can tell it to send your public IP address to the client rather than the network adapter IP address. Does your public IP address change? Fuck you, you get to write a daemon that checks for that shit, rewrites your FTP server config, and HUPs the bastard (or SIGTERMs it if your server sucks and can’t do a live config reload).
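With vsftpd, for example, the lie looks something like this (real options, made-up address):

pasv_address=203.0.113.7
pasv_min_port=44000
pasv_max_port=44010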
Let’s say that you don’t want to do that. Let’s say you work at a small company with a small business internet plan that gives you static IPs but a shitty modem. Let’s say that you don’t know what FTP is or how it works and your boss told you to get it set up ASAP and it’s not working (because the client over in Bendoverville Arkansas is being told to connect to a 10.x.x.x address) and it surely must be your ISP’s fault. So you call up Comcast Business/AT&T/Verizon/Whoeverthefuck and you complain at their technicians for hours and hours, and eventually you get connected to a human that knows what the problem is and tells you how to configure your stupid FTP server to lie like a little sinner. The big telco megacorps don’t like that. They don’t want to waste all those hours, and they don’t want to hire too many people who can figure that shit out because it’s expensive. You wanna know what those fucking asshole companies did?
Continued in part 3.
I’d like to interject for a moment. What you’re referring to as FTP is, in fact, smelly hot garbage.
For context, I wrote this while waiting for a migraine to pass. I was angry at my brain for ruining my morning, and I like to shit on FTP. It’s fun to be hyperbolic. I don’t intend for this to be an attack on you, I was just bored and decided to write this ridiculous rant to pass the time.
I must once again rant about FTP. I’ve no idea if you’re serious about liking it or you’re just taking the piss, but seeing those three letters surrounded by whitespace reminds me of all the bad things in the world.
FTP is, as I’ve said, smelly hot garbage, and the infrastructure built to support FTP is even worse. Why? Well, one reason is that FTP has the most idiotic networking model conceivable. To see how crazy it is, let’s compare to a more sane protocol, like HTTP (for simplicity’s sake, I’ll do HTTP/1.1). First, you get the underlying transport protocol stuff and probably SSL. The HTTP client opens a connection from some local ephemeral port to the destination server on port 80/443/whatever and does all the normal protocol things (so syn->synack->ack and Client Hello -> Server Hello+server cert -> client kex+change cipher -> change cipher -> encrypted data). FTP does TCP too! Same same so far (minus SSL, unless you’re using FTPS). Next, the HTTP client goes like this:
GET /index.html HTTP/1.1
Host: www.whatever.the.fuck
# a bunch of other headers
and you know what fucking happens here? The fucking server responds with the data and a response code on the same goddamn TCP connection. You get a big, glorious response over the nice connection you established:
HTTP/1.1 200 OK
# a bunch of headers and shit
HERE'S YOUR DAMN DATA NERD
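(If you want to watch that whole dance happen over one single connection, curl -v https://example.com will narrate every step of it for you.)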
So that’s nice, and the client you’re using to read this used that flow (or an evolution of that flow if you’re using HTTP/2 or HTTP/3). So what does FTP do? It does one of two really stupid things depending on whether you’re using active or passive mode. Active mode is the default for the protocol (although not the default for most clients), so let’s analyze that! First, your FTP client initiates a TCP connection to your server on port 21 (by default), and then the server just sends this:
<--- 220 Rebex FTP Server ready.
ok, that kinda came out of nowhere. You’re probably using a modern client that saves you from all of the godawful footguns, so it then asks the server what it supports:
---> FEAT
<--- 211-Supported extensions:
<--- AUTH TLS;SSL;
<--- CDUP
<--- CLNT
# A whole bunch of other 4 letter acronyms. If I was writing an FTP server, I'd make it swear at the user since there are a lot of fun 4 letter words
There’s some other bullshit we don’t care about right now, although highlights include sending the username and password in plain text. There’s also ASCII vs binary mode. WE’LL GET BACK TO THAT. :|
So then we want to do a LIST. You know what happens in active mode? Your computer opens up some random fucking TCP port. It then instructs the FTP server to CONNECT TO YOUR GODDAMN COMPUTER. Your computer is the server, and the other side is now the client. I would post a more detailed overview of the FTP commands, but most servers on the internet disable active mode because it’s a goddamn liability. All of a sudden, your computer has to be internet facing with open firewall ports, and that’s just a whole heap of shit.
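For the morbidly curious, the exchange goes roughly like this (LAN address made up; the last two numbers of PORT encode, uh, the port: 171*256 + 215 = 43991):

---> PORT 192,168,1,10,171,215
<--- 200 PORT command successful.
---> LIST
# the server now connects TO YOU at 192.168.1.10:43991 to deliver the listing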
I’m probably not blowing many minds right now because people know about this shit. I just want to mention that this is how FTP was built. The data plane and control plane are separate, and back in 19XX when this shit was invented, you could trust your fellows on ARPANET and NAT didn’t exist and sure HAM radio operators here’s the entire goddamn 44.0.0.0/8 block for you to do packet switched radio. A simple protocol for simple times, back before we knew what was good and what was bad.
So, active mode sucks! PASV is the future, and is the default on basically all modern clients and servers! Passive mode works exactly the same as the above, except when the client goes to LIST, the server opens some random TCP port (I’ve often seen something like 44000-44010) and tells the client, “hey you, connect to 1.2.3.4:44000 to get you your tasty data.” Sounds great, right? Well, there’s a problem that I actually touched on in my last paragraph. Back when this dogshit was first squeezed out in the 70s, everyone had a public address. There were SO MANY addresses! 4 billion addresses? We’ll never use all of those! That is clearly not the case anymore. We don’t have enough addresses, and now we have this wonderful thing called NAT.
Continued in part 2.
Because greedy investors are gullible and want to make money from the jobs they think AI will displace. They don’t know that this shit doesn’t work like they’ve been promised. The C-levels at Gitlab want their money (gotta love publicly traded companies), and nobody is listening to the devs who are shouting that AI is great at writing security vulnerabilities or just like, totally nonfunctioning code.
lol, I’d love to see the fucking ruin of the world we’d live in if current LLMs replaced senior developers. Maybe it’ll happen some day, but in the meantime it’s job security! I get to fix all of the bugfuck crazy issues generated by my juniors using Copilot and ChatGPT.
I want to offer my perspective on the AI thing from the point of view of a senior individual contributor at a larger company. Management loves the idea, but there will be a lot of developers fixing auto-generated code full of bad practices and mysterious bugs at any company that tries to lean on it instead of good devs. A large language model has no concept of good or bad, and it has no logic. It’ll happily generate string-templated SQL queries that are ripe for SQL injection. I’ve had to fix this myself. Things get even worse when you have to deal with a shit language like Bash that is absolutely full of God awful footguns. Sometimes you have to use that wretched piece of trash language, and the scripts generated are horrific. Remember that time when Steam on Linux was effectively running rm -rf /* on people’s systems? I’ve had to fix that same type of issue multiple times at my workplace.
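Both failure modes take about three lines to reproduce. Here’s string-templated SQL, sketched with sqlite3 and made-up names:

read -r NAME
sqlite3 app.db "SELECT * FROM users WHERE name = '$NAME';"
# feed it  x'; DROP TABLE users; --  and enjoy the fireworks

And the Steam bug, paraphrased from memory (variable name approximate):

STEAMROOT="$(cd "${0%/*}" && echo $PWD)"   # if that cd fails, STEAMROOT is empty
rm -rf "$STEAMROOT/"*                      # ...and this expands to rm -rf /*
rm -rf "${STEAMROOT:?not set}/"*           # the boring fix: refuse to run on empty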
I think LLMs will genuinely transform parts of the software industry, but I absolutely do not think they’re going to stand in for competent developers in the near future. Maybe they can help junior developers who don’t have a good grasp on syntax and patterns and such. I’ve personally felt no need to use them, since I spend about 95% of my time on architecture, testing, and documentation.
Now, do the higher-ups think the way that I do? Absolutely not. I’ve had senior management ask me about how I’m using AI tooling, and they always seem so disappointed when I explain why I personally don’t feel the need for it and what I feel its weaknesses are. Bossman sees it as a way to magically multiply IC efficiency for nothing, so I absolutely agree that it’s likely playing a part in at least some of these layoffs.
Thanks for pointing this out, I’ve updated my comment to get rid of the unnecessary distinction.
In addition to the fact that it’s not just English via hand gestures, I believe it’s done because sign language is speech, with all of the benefits that come with that. There are extra channels of communication present in sign language beyond just the words. There are equivalents of tone and inflection, and (I believe) even accents. Like, this video of this lady performing “Fuck You” in ASL is what made it click for me when I first saw it many years ago. She’s just so fucking expressive, in a way that subtitles could never be.
EDIT: changed my wording to be more accurate, since sign language literally is speech through a different medium. There’s no need to draw an unnecessary boundary.
I have an AKiTiO Node Titan eGPU enclosure with a GTX 1070 hooked up to an Ubuntu 22.04 laptop and it’s working pretty well. I’m doing PCI passthrough to an Arch Linux VM, since my company mandated that all Linux users must use Ubuntu. To stave off comments about this, I’ll say that it’s not just that I dislike Ubuntu. They’re requiring me to lock down so much stuff that I can’t do my job. Plus, the endpoint security sensor on the host plays absolute hell with anything that uses heavy multiprocessing. The GPU (with external monitors), second NVMe drive, mouse, keyboard, audio interface, microphone, webcam, 30 gigs of RAM, and 11 CPU cores are passed to the VM, and the host OS gets the laptop GPU + monitor and my continuing disdain.
I’ve been using this setup for a month. My experience thus far has been positive. I start the computer up with or without the GPU connected, connect the GPU if I haven’t yet, launch my VM via libvirt, and things just work. I really thought I’d have more problems with the GPU, but the USB passthrough stuff has been the truly problematic part (I can’t just pass the whole PCI USB controller for IOMMU reasons). It’s important to note that the GPU displays directly to external monitors. I think it’s possible to like, send the data back to your laptop screen? But I really didn’t want that.
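For the curious, the GPU half boils down to something like this (PCI address and VM name made up; gpu-hostdev.xml is just the standard libvirt PCI hostdev stanza):

lspci -nn | grep -i nvidia                                 # find the card, e.g. 0c:00.0
virsh nodedev-detach pci_0000_0c_00_0                      # unbind it from the host driver
virsh attach-device arch-vm gpu-hostdev.xml --persistent   # hand it to the VM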
(As an aside, the security people at my company have no problems with VMs lol. They know what I’ve done and they don’t seem to care).
As much as I despise Oracle and the lawn mower man known as Larry Ellison, I don’t think this is a problem. Oracle also had a lot to do with btrfs, and while that filesystem has problems, they’re not the sort of problems usually associated with Oracle (i.e. rapacious capitalistic practices like patent trolling and suing the fuck out of everyone all the time always). Oracle won’t own XFS, it’s owned by every single person who has ever contributed to that codebase.
I’m glad that my grumpy migraine ramblings brought someone some joy!