

I have to use Chrome to access a couple of sites that don’t play nice with Firefox.
I bet those sites will play nice if you switch your user-agent to display as Chrome.
How is the art a positive?
While true, they still collect data on the results. Hosting your own instance can prevent you from hitting rate limits as often.
- SearXNG (Google privacy frontend)
SearXNG is more than just a front end for Google Search; it’s an aggregator that, properly configured, can collect results from Bing, Startpage, Wikipedia, DuckDuckGo, and Brave.
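A sketch of what enabling those extra engines looks like in SearXNG’s settings.yml (engine names follow upstream defaults; verify the exact keys against your own instance’s config):

```yaml
# Fragment of settings.yml -- merged over the defaults.
use_default_settings: true
engines:
  - name: bing
    disabled: false
  - name: startpage
    disabled: false
  - name: wikipedia
    disabled: false
  - name: duckduckgo
    disabled: false
  - name: brave
    disabled: false
```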
Yes, back up your stuff regularly; don’t be like me and break your partition table with a 4-month gap between backups. Redoing 4 months of work in 5 hours is not fun.
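Even a dumb cron-able tarball beats a 4-month gap. A minimal sketch, assuming your compose folder lives at ~/compose and backups land in ~/backups (both paths are placeholders, adjust to your setup):

```shell
SRC="$HOME/compose"      # placeholder: your compose/config folder
DEST="$HOME/backups"     # placeholder: wherever backups should land
mkdir -p "$SRC" "$DEST"  # no-op if they already exist

# One dated, compressed archive per run; point a daily/weekly cron job at this.
tar -czf "$DEST/compose-$(date +%F).tar.gz" -C "$HOME" compose
```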
So why would you not write out the full path?
The other day my Raspberry Pi decided it didn’t want to boot up; I guess it didn’t like being hosted on an SD card anymore. So I backed up my compose folder and reinstalled Raspberry Pi OS under a different username than my last install.
If I specified the full path on every container it would be annoying to have to redo them if I decided I want to move to another directory/drive or change my username.
As others stated, it’s not a bad way of managing volumes. In my scenario I store all volumes in a /config folder.
For example on my SearXNG instance I have a volume like such:
services:
  searxng:
    …
    volumes:
      - ./config/searx:/etc/searxng:rw
This makes the files for SearXNG two folders away. I also store these in the /home/YourUser directory so Docker avoids needing sudo access whenever possible.
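The point about relative paths is easy to demo without Docker: Compose resolves ./ against the compose file’s directory, so moving the whole folder keeps the bind mounts valid. (The /tmp paths here are throwaway examples.)

```shell
rm -rf /tmp/stack /tmp/stack-moved    # throwaway demo dirs
mkdir -p /tmp/stack/config/searx
echo "settings" > /tmp/stack/config/searx/settings.yml

# Relocate the whole stack, as you would when changing drives or usernames.
mv /tmp/stack /tmp/stack-moved

# ./config/searx still resolves relative to the (moved) compose folder.
test -f /tmp/stack-moved/config/searx/settings.yml && echo "still resolves"
```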
Would this be the GIF killer? If PNG can support a relatively similar frame count and duration with marginally better image quality, it just may.
Grandma probably doesn’t do the actual torrenting herself; chances are OP has an Overseerr or Jellyseerr type of setup, where grandma makes the request and things just flow.
Done did their final sudo docker compose down
“Technically” my Jellyfin is exposed to the internet; however, I have Fail2Ban set up blocking every public IP and only whitelisting IPs that I’ve verified.
I use GeoBlock for the services I want exposed to the internet; however, I should also set up Authelia or something along those lines for further verification.
Reverse proxy is Traefik.
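For reference, a jail along these lines in Fail2Ban’s jail.local (the jellyfin filter name, port, log path, and ban time are assumptions; match them to your own setup and you’ll need a corresponding filter.d/jellyfin.conf with a failregex):

```ini
[jellyfin]
enabled  = true
port     = 8096                     # assumed default Jellyfin HTTP port
filter   = jellyfin                 # needs a matching filter.d/jellyfin.conf
logpath  = /var/log/jellyfin/*.log  # adjust to where your logs actually live
maxretry = 3
findtime = 600
bantime  = 86400                    # 24h ban; pick whatever you like
```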
If you aren’t already familiar with the Docker Engine, you can use Play With Docker to fiddle around and spin up a container or two using the docker run command. Once you get comfortable with the command structure you can move on to Docker Compose, which makes handling multiple containers easy using .yml files.
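To make that docker run → Compose jump concrete, here’s a sketch using nginx purely as an example image (the service name and ports are arbitrary):

```yaml
# docker run -d --name web -p 8080:80 nginx:alpine
# ...becomes, in a compose.yml:
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
```

Then `docker compose up -d` brings it up, and adding more services is just more entries in the same file.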
Once you’re comfortable with Compose, I suggest working into reverse proxying with something like SWAG or Traefik, which lets you put a domain in front of the IP, handles SSL certificates, and offers plugins that give you more control over how requests are handled.
There really is no “guide for dummies” here, you’ve got to rely on the documentation provided by these services.
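To give the Traefik step some shape, a sketch of compose labels for exposing one service (the router name, domain, entrypoint, and certresolver names are all placeholders; check the Traefik docs for your version):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)"
      - "traefik.http.routers.jellyfin.entrypoints=websecure"
      - "traefik.http.routers.jellyfin.tls.certresolver=letsencrypt"
```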
Me: Siri turn lights on
Siri: Now playing Bon Jovi’s “Wanted dead or alive”
Me: Siri shut the fuck up, you have one job, do that job
This is the Firefox extension I use. I would check the headers your browser passes with WhoAmI to verify your user-agent; alternatively, you can use Invidious to get around YouTube’s bullshit.
I host a public Invidious instance for folks with a Canadian IP - https://inv.halstead.host/