It's rm with a trash can that by default is in a terribly insecure location (/tmp is typically world read/writable).
EDIT: just tested it, it creates /tmp/graveyard-$USER with 0755.
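For reference, the check, and what a locked-down default would look like (mkdir -m is standard):

    ls -ld /tmp/graveyard-$USER          # drwxr-xr-x = 0755: anyone can list it
    mkdir -m 700 /tmp/graveyard-$USER    # a 0700 graveyard would at least not leak names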
It's a great idea but perhaps needs to be executed a little better. Hoping the op reads these comments as us all just being grumpy sysops but takes the positives from them, and works on rip3 with better system-level safety.
And for everyone else, always worth remembering that the author could have instead built: .
Instead they chose to build something, so let's not shit all over their intent
Not only that, but the comment could have been much better: It could be an issue on GH saying "the default location is insecure, please use ~/.cache" or whatnot.
Good point. :) I definitely overanalyze to the point where I program a lot less extracurricular stuff. I admire people with the ability to just do it.
Hear, hear!
The XDG trashcan would be a better place, and you'd even be able to restore files using whatever GUI file browser you use: https://specifications.freedesktop.org/trash-spec/latest/
It would be nice to know how and why the goals of this supposedly diverge from xdg-trash
Before I read this comment I thought "xdg-trash spec" in the README was bashing xdg.
God forbid `/tmp` is a different filesystem from wherever the original file resided.
which, it usually is... tmpfs
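Easy enough to check, for what it's worth:

    findmnt -no FSTYPE /tmp    # prints "tmpfs" on most modern distros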
Which, until recently, lost file metadata such as extended attributes (in very old tmpfs versions, even timestamps could be truncated).
Whenever exact file copies are desired, they must not pass through /tmp or any other tmpfs mount, unless you have checked that the tmpfs version is new enough to preserve file metadata.
(On multi-user computers, it was common to copy a file through /tmp to pass it between users, because that might have been the only place where both users had access.)
Older versions of tmpfs did not support extended attributes at all; later, only certain system attributes were preserved while all user attributes were dropped. Only a year or two ago was tmpfs enhanced to preserve most extended attributes, with some size constraints. Many older systems still run tmpfs versions that lose extended attributes.
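If you want to test your own system, a quick check, assuming the attr tools (setfattr/getfattr) are installed:

    touch /tmp/xattr-test
    setfattr -n user.note -v hello /tmp/xattr-test   # fails with "Operation not supported" if user xattrs are unsupported
    getfattr -n user.note /tmp/xattr-test            # otherwise echoes the attribute back
    rm /tmp/xattr-test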
Oh heck, good point. I forgot about extended attributes in my criticism. If you've gone through the effort of actually using SELinux, this could undo it... at least to the point of requiring 'restorecon' to reapply the context policy on deleted-then-restored files.
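For anyone bitten by this, the fix-up is a one-liner (restorecon ships with the SELinux policycoreutils):

    restorecon -Rv ~/restored-files    # recursively reapplies the policy's contexts, printing what changed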
So the trash bin leaves grime on things, great.
Self-clearing insecure recycle bin, but it's memory safe!!!
And if you delete too much, the trash folder on tmpfs will fill up to half your available memory. With how badly Linux handles swap, unintentionally deleting a lot of data while doing something memory-intensive may well slow your system to a crawl, forcing you to either sit it out or reboot and lose your recycle bin.
Don't forget 'systemd-oomd', which may try to help. Depending on the configuration of your distribution/desktop environment, there's a non-zero chance your whole DE gets whacked. I'm generally a fan of 'systemd', but the collection of parts requires attention. cgroups and such.
Absolutely none of this is a concern for this project, though. I just find this-then-that stuff funny
The author acknowledges that the tmpdir can be a bad default, which is why you can change it :)
> Graveyard location.
> You can see the current graveyard location by running rip graveyard. If you have $XDG_DATA_HOME environment variable set, rip will use $XDG_DATA_HOME/graveyard instead of the $TMPDIR/graveyard-$USER.
> If you want to put the graveyard somewhere else (like ~/.local/share/Trash), you have two options, in order of precedence:
> Alias rip to rip --graveyard ~/.local/share/Trash
> Set the environment variable $RIP_GRAVEYARD to ~/.local/share/Trash.
> This can be a good idea because if the graveyard is mounted on an in-memory file system (as /tmp is in Arch Linux), deleting large files can quickly fill up your RAM. It's also much slower to move files across file systems, although the delay should be minimal with an SSD.
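Concretely, both options from the quoted README boil down to a line or two in your shell config (paths as given there; the alias form takes precedence):

    alias rip='rip --graveyard ~/.local/share/Trash'
    # or, lower precedence:
    export RIP_GRAVEYARD=~/.local/share/Trash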
>rip is a rust-based rm with a focus on safety, ergonomics, and performance
then puts the default trash location in /tmp? Knowing it's a bad decision? Kinda makes me leery of the rest, with reasoning like that.
Yeah, I can't disagree with this, knowing that something is a bad default and still using it isn't fantastic.
It would be much better to put the files somewhere else and clear old files whenever rip was run.
>~/.local/share/Trash
Kinda bad example since this tool doesn't follow xdg-trash spec per README. Tools that do (such as the file managers from the big DEs) use this directory.
That's not quite how things work. File permissions don't magically go away just because files land in /tmp; this is only a problem if your file permissions are set up wrongly and implicitly rely on parent folders, etc.
But then file permissions being "wrong" or relying on the parent folder is the norm...
I wonder if it does keep ACLs associated with the file.
Take this prior art for example:
Home directories, and the root they're stored in, aren't generally well-protected; trash directories should be, to avoid leaking. Per the XDG spec, if a trash directory is open/shared, you want the sticky bit. More justification (and considerations) here: https://specifications.freedesktop.org/trash-spec/latest/
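For the unfamiliar: the sticky bit is what stops users deleting or renaming each other's files in a world-writable directory; /tmp itself relies on it, and the spec requires it for a shared $topdir/.Trash:

    ls -ld /tmp                            # drwxrwxrwt: the trailing 't' is the sticky bit
    sudo mkdir -m 1777 /mnt/data/.Trash    # hypothetical shared trash directory, per the spec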
An interesting challenge or angle to this: space consumption, ownership in a real-world sense (ie: fingerpointing)
Does anyone know how something like GNOME or KDE handles it? I'm guessing a directory in the user's home should be good enough, right?
That is a neat easy challenge for my next CTF.
And sometimes mounted to a device with less storage
This is a good practical example for the less experienced that just because it’s written in rust doesn’t mean it’s magically more secure. Who’s gonna write up the CVE?
Warning, this software
> does not implement the xdg-trash spec or attempt to achieve the same goals
Second warning, deleted files become world readable by design
> Deleted files get sent to the graveyard (typically /tmp/graveyard-$USER
> Second warning, deleted files become world readable by design
This is not true. I just tested to be sure; the permissions of files are preserved.
Is it accurate to say that deleted files' filenames and other metadata become readable by design? Or at least those of top-level entries (not necessarily subdirectories)?
I'm not sure if that's "by design" or a "bug" – I'm not really familiar with this tool. I agree that ideally it should copy the directory permission bits too, or be more restrictive about that in some other way.
Files are world-readable by default; see umask. You tested the exception: a restricted file.
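Easy to see on any stock system:

    umask                            # typically 0022
    touch newfile && ls -l newfile   # -rw-r--r--: group and others can read it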
It's moving files out of potentially-safe directories into a definitely-not-safe one.
What if the original file was in a private directory?
It's always laudable when OSS projects get some love, but... I'm slightly put off by programs that try to be witty or funny (e.g. flags like --decompose and --seance)
I like the graveyard theme, and decompose; perhaps not seance, though.
I don't think this is solving the problem at the right abstraction level.
For example, you can also "delete" files by accidentally redirecting to them using the ">" operator in the shell.
Maybe some kind of snapshotting filesystem (+ tools) is a better way to deal with this.
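For what it's worth, the shell already has a narrow guard against that particular footgun; in bash:

    set -o noclobber           # '>' now refuses to overwrite existing files
    echo hi > existing.txt     # bash: existing.txt: cannot overwrite existing file
    echo hi >| existing.txt    # explicit override when you really mean it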
I wonder how much of this being pushed so hard is
- the allure of improving on well established UNIX commands (always an interesting topic)
- and the "it's rust factor"
It's mostly just the first. And to achieve the first people use languages that they're productive in, which is often Rust or Go due to being able to compile a binary and performance. Sometimes people use Python, Ruby, or Node, and that's okay too.
Previous comment on the topic: https://news.ycombinator.com/item?id=41794221
I'm building a general-purpose undo that will log and let you undo things like chmods, chown/chgrps, mv's, and rm's. It will work with the recursive parameters of those utilities and shunt that out to `find` with `-exec` so it can track the undo of every individual file. It will use the xdg-trash spec for rm'ing files. I haven't pushed it up to GitHub yet, but I have test cases working locally. In particular, it will handle idempotent updates properly: if you, for example, chown a file with the same user and group, it will be recorded as a no-op, so that a later (untracked) change won't get overwritten when you undo a tracked change that would otherwise interfere with it.
It's just plain old bash, nothing fancy like Rust, but it should work.
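A minimal sketch of the chmod-logging idea, assuming GNU stat (the real script will differ):

    log=/tmp/undo.log
    # Record each file's current mode before a recursive chmod...
    find "$dir" -exec bash -c 'for f; do printf "chmod %s %q\n" "$(stat -c %a "$f")" "$f"; done' _ {} + >> "$log"
    chmod -R go-w "$dir"
    # ...so that "bash $log" later replays the old modes, file by file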
A small PSA if you're on Windows and, like this tool, want to focus on "ergonomics" and "performance" of deleting files: disabling real-time protection in the Security Center makes deleting large directory trees about 50% faster and reduces CPU usage by about 80% for me. It's wholly non-obvious to me why this is the case, considering that DeleteFile doesn't even take a file handle. Perhaps it acquires one internally, and that still triggers on-demand scanning of the file to be deleted?
The scanner needs to scan files being deleted to catch certain kinds of malware, and Windows blocks until the file is actually scanned and deleted. Lots of file operations are like this on Windows. It makes filesystem operations seemingly easier to program and reason about, but much, much slower. I suspect the synchronisation assumptions this allows are also deeply ingrained in legacy code.
Any serious Windows app needs to spawn many threads to work around this performance issue when batch operating on lots of files.
OK, in what way is it safer than the GNU or BSD alternative? If the safety comes from logic, that could be implemented there just fine.
Presumably the use of a trash bin
There's no scary pointers in the code.
I really liked the idea.
Played around with rip2 on macOS.
Unfortunately, after playing with it for two minutes, the "undo" and "seance" functionality seemed to break.
I have opened a bug: https://github.com/MilesCranmer/rip2/issues/54
Am I doing something wrong?
Except it's mv
To a potentially different filesystem that might not have enough space (and may be tmpfs, causing Linux to start killing processes)
TBF, it's a surprisingly non-straightforward problem. I appreciate the effort and would like it to succeed.
It's cool that you can build this but in practical terms it's solving a problem that nobody has.
That's not true. rm is a known footgun, and commands like trash-cli already exist.
I use the (Python) trash-cli utility all the time.
But it's written in Rust.
I don’t know if I would use it but thanks for reviving a dead project and maintaining it
When someone does a zig implementation of rm, what will its headline announcement be?
I'm struggling to see the advantage of this over, for example, `trash-cli`
So, what will happen to all bash scripts once rm is aliased as suggested?
Aliases are not expanded in shell scripts, unless they explicitly opt into it. Additionally, they are run in a non-interactive shell, so will not load your ~/.bashrc where you probably defined the alias.
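Both behaviours are easy to demo in a few lines of bash:

    #!/bin/bash
    # ~/.bashrc is not sourced here, and alias expansion is off in scripts by default
    shopt -s expand_aliases                 # opt in explicitly
    alias rm='echo would have run: rip'     # must also be (re)defined inside the script
    rm somefile                             # expands now; without both lines, the real rm runs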
Nice to know, thank you
Cool project, hopefully you got a good grade!
I yearn for the day when Rust devs will be able to withhold mentioning that the software they made is, in fact, written in Rust.
Same for arch users :P
Rust programmers are the punk rockers of the software industry. They're loud, militant, they like to draw attention to themselves, but in the end they can't really deliver anything beyond the same basic riffs.
It's for those who want the nostalgia of cleaning up the family PC or their parents' computer.
Nothing like having your parents call at 3 in the morning because their disk is full despite “deleting” the files
until you delete something you didn't mean to delete...
Then use a DE that moves the file into the trash.
Your desktop environment doesn't have any effect on the command line
I could not edit my comment in time, but I would also like to add that when it comes to family: they do use a DE, not the terminal or command line.
The command line is for power users, and utilities designed for file manipulation like "rm" should operate as intended (in this case, removing files outright).
Users should understand the risks and benefits of using these powerful tools without unnecessary safeguards and constraints. Users who understand these tools should have the freedom to use them without unnecessary interruptions. If someone requires additional safety measures, they can still use the same "rm" utility which already supports options for added safety such as the "rm -i" command for interactive mode or use "mv" (which is designed for moving / renaming files); however, imposing constant prompts or silly defaults would be antithetical to the efficiency and speed that power users expect from command-line operations.
When I use "rm", I expect my files to be removed quickly and efficiently. I believe it is important to note that using "rm" does not actually erase the file's data from the disk; it removes the directory entry for the file and deallocates the inode associated with that file. This means that the data remains on the disk until it is overwritten, making it potentially recoverable. If I want to ensure that files are truly removed, I use "srm" (secure remove). The "srm" utility not only removes the file entry but also overwrites the data on disk multiple times with random patterns which means it truly gets removed (excluding edge cases related to specific file system behaviors such as those using journaling or copy-on-write mechanisms or maintain snapshots or copies of files, and so forth).