As a person raised by GUIs, an extra visual confirmation and an extra prompt are a nice touch. I also like it when the system says “Oh, is that a directory? No problem, I’ll give you the usual treatment.” You know what I mean?
alias ls='ls --group-directories-first --color=auto -w 120'
alias ll='exa --group-directories-first -l'
alias la='ll -a'
alias lt='ll --tree'
alias cp='cp --recursive --interactive --verbose --reflink=always'
alias mv='mv --interactive --verbose'
# custom pwd
# - replace $HOME with ~
# - make everything before the last '/' green, and everything after white and bold
# - alias to p
alias pwd="pwd | sed 's:$HOME:~:' | sed -E 's:(.*/)([^/]+):\x1b[32m\1\x1b[0m\x1b[1m\2\x1b[0m:'"
alias p="pwd"
# custom cd.
# - prints the new directory after cd'ing.
cd () {
command cd "$@" && p;
}
alias c="cd"
alias '..'='c ..'
alias '...'='c ../..'
# For the '~' alias, we want to use the original cd because printing '~'
# again would be redundant.
alias '~'='command cd'
# custom rm.
# adds a '-r' flag only if there is a single argument and that argument
# is a directory.
# This is because I want the behavior of -I (interactive) to be the default,
# but I also want to have the -r flag available when I need it without being
# prompted for single files.
function rm () {
# 'command' bypasses this function itself, avoiding infinite recursion.
if [ $# -eq 1 ] && [ -d "$1" ]; then
command rm --verbose --interactive=once --recursive "$1";
else
command rm --verbose --interactive=once "$@";
fi;
}
# mkdir + cd (written as a function because functions run in the current
# shell, which is what cd needs)
mc () {
mkdir -p -- "$1" && cd -P -- "$1";
}
The problem I have with this kind of thing is: I work on hundreds of different VMs and containers, and they can’t all be set up like this, let alone have their root and system accounts set up like this. So you get too used to it in one place and forget it’s not there when trying to troubleshoot. These days I tend to keep my shell simple so my skills transfer easily anywhere.
You can source a script for this directly from GitHub using curl and process substitution, to temporarily have the config when and where you need it without making it the default.
I do the same with vim.
Edit: here’s the command:
source <(curl -s https://raw.githubusercontent.com/sorrybookbroke/bashingmyheadin/master/bashrc)
Right? I wonder why this approach isn’t more common.
How do you do this with vim, btw? I’ve looked into it before but haven’t found a fully satisfying answer yet.
I use the command
vim -Nu <(curl https://raw.githubusercontent.com/sorrybookbroke/vim-config/master/init.vim)
Replace my name and the repo’s name with yours and it’ll work (that repo doesn’t actually exist). It only handles a single file, though.
Ah, thanks! I was actually thinking of something else (how to use bashrc aliases when doing :! inside of vim), but it’s good to know that there are ways to use a vim config on other machines like that.
I don’t love the idea of essentially giving Microsoft (or any random corporation) root access to my machine. I guess you could review it every time you download it, though.
If you’re allowed docker in your systems, build a sysadmin container with all your favorite tools. Then just run it locally and remotely with the root directory bound to /mnt or something
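A minimal sketch of that toolbox idea; the base image, package list, and the name sysadmin-tools are all made up here, not anyone’s actual setup:

```shell
# Build a small "toolbox" image with your favorite tools baked in.
cat > Dockerfile <<'EOF'
FROM alpine:latest
RUN apk add --no-cache bash vim tmux curl rsync htop
COPY bashrc /root/.bashrc
ENTRYPOINT ["/bin/bash"]
EOF
# Build once, then run locally or remotely with the host root at /mnt:
#   docker build -t sysadmin-tools .
#   docker run --rm -it -v /:/mnt:ro sysadmin-tools
```

Binding the host filesystem read-only at /mnt keeps the container useful for inspection while limiting the damage a fat-fingered rm can do.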
Same here. I don’t even have ll in my vocabulary, although it seems to be a default on Debian-based systems.
As an Ubuntu user who has to use SLES for work, this is a huge pain point for me. I literally have a script to add ll to my .bashrc. My team develops a product built on top of SLES, so I need to deploy fresh SLES-based VMs daily, sometimes up to five times a day. My script adds my public key for SSH, enables SSH port forwarding and debugging, sets up some aliases I rely on too much, installs a few programs I regularly need, etc.
That’s exactly the thing. I limit my configuration to basic environment variables and define sudoers using LDAP (sss). This way I can have some preferred defaults for some tools, but I don’t configure many aliases.
If I really need something, I package it (deb, rpm, …) and deploy it properly, either as a profile file or as a script/program.
Using a big well configured bashrc/zshrc/… is more trouble than it’s worth for administrators, because it doesn’t transfer between environments easily and increases the mental load by a lot. Even though the idea itself is good.
Aliasing “cp” to “cp --reflink=auto” is a dangerous game. It saves you a ton of time and space if you’re using a CoW filesystem, but you can easily fill up a non-CoW filesystem without realizing.
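A quick way to see the difference (file names made up): --reflink=always fails loudly on a filesystem without copy-on-write support, while --reflink=auto silently falls back to a full copy.

```shell
printf 'hello\n' > src.txt
# auto: CoW clone where the filesystem supports it, silent full copy otherwise
cp --reflink=auto src.txt clone-auto.txt
# always: CoW clone or a loud failure, so a non-CoW filesystem can't sneak up on you
cp --reflink=always src.txt clone-always.txt || echo "no CoW support here"
```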
Only when needed. And I don’t really use aliases at all for basic things like that. I don’t like cp or mv being verbose most of the time, as I don’t have much use for their output; I like commands to be quiet unless there is a problem, since that makes it easier to spot the error lines. I don’t really want to be confirming every file in a recursive rm or cp or mv either; that just leads to hitting y automatically and not thinking about it. I do like how zsh warns you about
rm *
with the directory you are about to remove, though.
That’s fair.
I don’t really want to be confirming every file in a recursive rm or cp or mv either
Ah, but rm will only make you confirm if there are more than 3 files to be removed, and cp and mv only if there’s a risk of overwriting. And it’s only one confirmation per command, not per file.
You must have a different version of rm than I do:
$ rm -i a
rm: remove regular empty file 'a'? y
$ rm -ir f
rm: descend into directory 'f'? y
rm: remove regular empty file 'f/a'? y
rm: remove regular empty file 'f/b'? y
rm: remove regular empty file 'f/c'? y
rm: remove regular empty file 'f/d'? y
rm: remove regular empty file 'f/e'? y
rm: remove directory 'f'? y
Never really had an issue with cp or mv overwriting files where I didn’t mean to. Possibly because I don’t default to recursive all the time, so they tend to error when accidentally copying folders. And I tend to tab-complete paths to see what is in them. I also tend to use rsync rather than plain cp for copying large directories when the destination might not be empty to start with.
This should work:
$ rm -Ir f
Be careful, as this can easily break many scripts.
shouldn’t be a problem, because scripts are run non-interactively and my
.bashrc
wouldn’t be read, right?
That’s actually good to know, thanks.
See replies to this comment.
That’s not necessarily the case. Most scripts will just run in the current environment (meaning your .bashrc will be used) and not define many/any arguments for commands like cp or rm. Your .bashrc file is read whenever you start a command-line shell, so by the time you can even run a script you have probably already invoked your aliases. Exceptions would be if you’re running a script from cron or running the script under another shell like sh or zsh.
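Whether aliases actually reach a script is easy to test on a given machine (the script name here is made up). Bash only expands aliases in interactive shells unless expand_aliases is set, so a #!/bin/bash script typically ignores them even when they are defined:

```shell
cat > alias-check.sh <<'EOF'
#!/bin/bash
alias greet='echo expanded'
# Non-interactive bash leaves expand_aliases off, so this alias is
# defined but never expanded on the next line:
greet 2>/dev/null || echo "aliases not expanded here"
EOF
bash alias-check.sh
```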
I definitely type
cp -i
and mv -i
by habit, and I have a ~/trash
directory that I often move stuff into instead of deleting it, then periodically blow away completely. I agree with some other commenters that all the stuff in .bash_profile
is maybe not a good idea, but I find that just doing those three things creates a lot more safety than the “you said to remove it, so now 5 ms later it’s gone forever” Unix default.
I think I’d increase verbosity a lot more if the output overwrote the current line rather than filling up my scrollback buffer. But I exclusively use tmux to ssh into my server and resume the same session for days or weeks, at different times and from different devices, so I need to read back what I was doing to pick up where I left off.
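The ~/trash habit can be wrapped in a tiny function; the function name and the --backup choice are my own sketch, not the commenter’s setup:

```shell
mkdir -p "$HOME/trash"
# Move into ~/trash instead of deleting; numbered backups stop two
# files with the same name from clobbering each other.
trash () {
    mv --backup=numbered -t "$HOME/trash" -- "$@"
}
# Purge by hand (or from cron) when you're sure:
#   rm -rf "$HOME/trash" && mkdir -p "$HOME/trash"
```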
Output also gets in the way of job control, because all background jobs share the same output: your terminal. Side note, if your command doesn’t output anything, slap a " &" on the end of that bad boy so you can do other things while it works. You can && commands together before the & for jobs that should run after each other, or you can prepend "wait && " to only execute the next commands once all background jobs finish.
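That pattern, sketched out with sleep standing in for real work:

```shell
# Two independent background jobs; && chains the dependent steps
# inside each one, & backgrounds the whole chain.
(sleep 1 && echo "job A: step two ran after step one") &
(sleep 1 && echo "job B: done") &
# wait blocks until every background job has exited.
wait && echo "all background jobs finished"
```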
For deleting specifically I use
gio trash
; it should work on any GNOME install.
It is a very bad idea to use aliases and functions to modify the behavior of standard commands. Sooner or later you will type a command on a machine where your aliases are not defined and get unexpected results, in the worst case losing your data. So
la
for ls -a
is fine, but rm
for rm -r
is not. Choose unique names for your aliases.