This is for me, so I'm not going to dance around the instructions here.
Docker Compose
```yaml
services:
  nginx:
    container_name: nginx
    image: uozi/nginx-ui
    tty: true
    restart: always
    environment:
      TZ: Australia/Hobart
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
      - /etc/ssl/certs/:/etc/ssl/certs/
      - /etc/ssl/private/:/etc/ssl/private/
      - ./nginx:/etc/nginx
      - ./nginx-ui:/etc/nginx-ui
    ports:
      - "80:80"
      - "443:443"
```
Nginx Template
```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 443 ssl;
    server_name pihole.local;

    ssl_certificate /etc/ssl/certs/selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/selfsigned.key;

    location / {
        proxy_pass http://pihole:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # WebSocket support, using the $connection_upgrade map above
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

server {
    listen 80;
    server_name pihole.local;
    return 301 https://$host$request_uri;
}
```
Now that we have that crap out of the way, make sure it's launched and working. Everything will be unsecured, so let's fix that.
```shell
sudo openssl genrsa -out /etc/ssl/private/rootCA.key 2048
sudo openssl req -x509 -new -nodes -key /etc/ssl/private/rootCA.key -sha256 -days 1024 -out /etc/ssl/certs/rootCA.crt -subj "/C=US/ST=California/L=San Francisco/O=MyOrganization/OU=IT Department/CN=MyRootCA"
sudo nano /etc/ssl/private/openssl-san.cnf
```
The contents for the new conf file:
```ini
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[ req_distinguished_name ]
CN = your_primary_domain.local

[ v3_req ]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = your_primary_domain.local
DNS.2 = pihole.local
DNS.3 = anotherdomain.local
```
The primary domain can be the one hosting nginx, e.g. nginx.local.
You cannot alter the certificate's domain list later, so try to anticipate all your domains. It's not the end of the world if you can't; just run these instructions again to issue a new certificate.
```shell
sudo openssl genrsa -out /etc/ssl/private/selfsigned.key 2048
sudo openssl req -new -key /etc/ssl/private/selfsigned.key -out /etc/ssl/private/selfsigned.csr -config /etc/ssl/private/openssl-san.cnf
sudo openssl x509 -req -in /etc/ssl/private/selfsigned.csr -CA /etc/ssl/certs/rootCA.crt -CAkey /etc/ssl/private/rootCA.key -CAcreateserial -out /etc/ssl/certs/selfsigned.crt -days 500 -sha256 -extfile /etc/ssl/private/openssl-san.cnf -extensions v3_req
```
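Before pointing nginx at the new files, it's worth sanity-checking that the leaf cert actually chains to the root CA and carries the SANs. The sketch below reproduces the same flow in a throwaway scratch directory (so no sudo and no system paths; the CN/SAN values are illustrative) and then verifies it; it assumes OpenSSL 1.1.1 or newer for the `-ext` flag.

```shell
#!/bin/sh
# Reproduce the CA + SAN-cert flow in a scratch directory, then verify the chain.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Throwaway root CA (mirrors the rootCA.key / rootCA.crt commands above)
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 \
  -out rootCA.crt -subj "/CN=MyRootCA"

# SAN config (mirrors openssl-san.cnf; domains are placeholders)
cat > openssl-san.cnf <<'EOF'
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[ req_distinguished_name ]
CN = nginx.local
[ v3_req ]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = nginx.local
DNS.2 = pihole.local
EOF

# Leaf key, CSR, and CA-signed cert (mirrors the selfsigned.* commands above)
openssl genrsa -out selfsigned.key 2048
openssl req -new -key selfsigned.key -out selfsigned.csr -config openssl-san.cnf
openssl x509 -req -in selfsigned.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -out selfsigned.crt -days 500 -sha256 \
  -extfile openssl-san.cnf -extensions v3_req

# Both checks should succeed: the chain verifies and the SANs are present
openssl verify -CAfile rootCA.crt selfsigned.crt
openssl x509 -in selfsigned.crt -noout -ext subjectAltName
```

Run the same two verify/inspect commands against `/etc/ssl/certs/rootCA.crt` and `/etc/ssl/certs/selfsigned.crt` to check the real files.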
That's pretty much it on the server end. You just need to add a new nginx config for each service.
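Each extra service is just another copy of the template with the names swapped. For example, for a hypothetical grafana.local (the container name, port, and domain are placeholders; the domain must be one of the `alt_names` in openssl-san.cnf):

```nginx
server {
    listen 443 ssl;
    server_name grafana.local;
    ssl_certificate /etc/ssl/certs/selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/selfsigned.key;

    location / {
        proxy_pass http://grafana:3000;   # placeholder upstream container:port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name grafana.local;
    return 301 https://$host$request_uri;
}
```

Only `server_name` and `proxy_pass` change; the certificate lines stay the same because one SAN cert covers every domain.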
Now, I can't yet speak for Chrome, but to get Firefox to stop being weird:
- Download rootCA.crt.
- Go to Settings > Privacy & Security > View Certificates... > Authorities > Import.
- Import the file and check the "Trust this CA to identify websites" box.
That should make Firefox play nice with all the domains in the cert.
That's it! Get on with it.

In today's rapidly advancing world, the threat of an apocalypse looms large, with potential disasters stemming from multiple fronts: genetic engineering, pandemics, nuclear conflict, and artificial intelligence (AI) gone rogue. Each of these vectors presents a formidable challenge, demanding sophisticated solutions that could arguably be beyond human capacity alone. This is where the pursuit of Artificial General Intelligence (AGI) comes into play, promising not just advancements but perhaps survival itself.
Unpacking the Threats
CRISPR and Genetic Engineering: CRISPR technology has handed humanity the genetic scissors to edit life's blueprint. However, this powerful tool comes with the potential for unintended consequences, including the creation of new pathogens or irreversible changes to the human genome. The complexity of biological ecosystems and the high stakes of gene editing call for oversight that could one day be enhanced by AGI's computational power and predictive modeling.
Virus Manufacture and Biological Threats: The manufacture of viruses, whether for research or as biological weapons, presents a clear existential threat. Current biosecurity measures may not be foolproof in a world where technology is accessible and expertise widespread. AGI could help by designing more effective containment strategies, predicting outbreak patterns, and speeding up vaccine development through rapid simulation and testing.
Nuclear War: The perennial specter of nuclear war continues to cast a long shadow over global security. AGI could potentially manage disarmament processes, monitor compliance with international treaties, and even control nuclear arsenals with a level of impartiality and precision unattainable to humans.
AI Armageddon: Ironically, the very pursuit of AI could itself precipitate an apocalypse if control over superintelligent systems is lost. Developing AGI might seem like fighting fire with fire, but with proper safeguards, it could actually enforce stringent controls over lesser AI forms and prevent them from evolving unchecked.
Expanding Control and Developing Defenses: The Dual Pathways to Mitigation
Control Through International Cooperation: History shows us that control agreements can be effective. Just as the world has seen with chemical weapons and, to a lesser extent, nuclear weapons, international treaties can mitigate risks. The principle of mutually assured destruction has helped prevent nuclear wars so far, but it's a precarious balance. The constant threat of accidents or the actions of rogue leaders looms large, making this control only a partial solution. AGI could play a critical role by enhancing treaty verification processes, ensuring compliance, and managing de-escalation protocols during crises.
Advancing Defensive Technologies: The second approach to mitigating these apocalyptic threats is through technological advancements that counteract the risks. Just as rapid development of counter-viruses could neutralize biothreats, there needs to be a similar pace in creating defenses against nuclear weapons. For over sixty years, the world has lacked a reliable method to prevent nuclear attacks effectively. AGI could change this by accelerating the development of defensive strategies that are beyond current human capabilities.
AGI's Role in Rational Decision-Making and Crisis Management
Imagine a scenario where a nuclear crisis is imminent. Here, AGI could provide highly rational, unbiased advice for decision-makers, potentially guiding humanity away from catastrophic outcomes. Furthermore, AGI could be tasked with developing systems capable of neutralizing threats in real-time, such as intercepting ballistic missiles or even safely redirecting them into space. This level of intervention would require an AGI with capabilities far surpassing anything currently available: an entity that combines deep knowledge of technology, human psychology, and strategic defense.
Conclusion
As we stand on the brink of potential global catastrophes, the imperative to develop artificial general intelligence has never been clearer or more urgent. AGI holds the promise of solving problems that are currently beyond human reach, acting as a guardian of humanity's future. By harnessing this potential responsibly, we could secure a safer, more resilient world for future generations.
I'm not going to mess you around here: this was oddly painful, with conflicting information everywhere, and quite irritating given that I know nothing about the implementation details of SSH.
Follow these steps to enable two-factor authentication (2FA) over SSH using a public key and Google Authenticator on Ubuntu 22.04.4 LTS:
- Update your package lists:

```shell
sudo apt-get update
```

- Install the Google Authenticator PAM module:

```shell
sudo apt-get install libpam-google-authenticator
```
- Set up Google Authenticator by running the setup tool as the user who will log in:

```shell
google-authenticator
```

- Answer the prompts as follows:
  - Do you want authentication tokens to be time-based (y/n)? y
  - Do you want me to update your "~/.google_authenticator" file (y/n)? y
  - Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n)? y
  - By default, tokens are good for 30 seconds and in order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the current time. If you experience problems with poor time synchronization, you can increase the window from its default size of 1:30min to about 4min. Do you want to do so (y/n)? n
  - If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. Do you want to enable rate-limiting (y/n)? y
- Scan the QR code into your authenticator app.
- Edit the PAM SSHD configuration:

```shell
sudo nano /etc/pam.d/sshd
```

- Add these lines at the bottom of the file:

```
auth required pam_google_authenticator.so nullok
auth required pam_permit.so
```

- `nullok` allows users to log in without 2FA until they configure their OATH-TOTP token. Remove this option once all users are set up.
- Configure SSH for challenge-response authentication:

```shell
sudo nano /etc/ssh/sshd_config
```

- Set `ChallengeResponseAuthentication` to `yes`. Update it if present, uncomment it, or add the line.
- Restart the SSH service to apply changes:

```shell
sudo systemctl restart sshd.service
```
Test your configuration in a separate terminal window, keeping your current session open in case you lock yourself out. If you already use a public key, there should be no noticeable change yet.
- Update SSHD to require 2FA:

```shell
sudo nano /etc/ssh/sshd_config
```

- Add or update the following line to require a public key plus either password or keyboard-interactive authentication:

```
AuthenticationMethods publickey,password publickey,keyboard-interactive
```

- Enable keyboard-interactive authentication. On 22.04 LTS:

```
KbdInteractiveAuthentication yes
```

  On older releases, the option is named `ChallengeResponseAuthentication` instead:

```
ChallengeResponseAuthentication yes
```
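Putting the sshd_config changes together, the relevant lines end up looking roughly like this on 22.04 (the `PubkeyAuthentication` and `UsePAM` lines are Ubuntu defaults, shown here only for context):

```
# /etc/ssh/sshd_config (relevant lines only)
PubkeyAuthentication yes
UsePAM yes
KbdInteractiveAuthentication yes
AuthenticationMethods publickey,password publickey,keyboard-interactive
```

With this in place, a public key alone is no longer enough; sshd demands the key and then hands the TOTP prompt to PAM.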
- Further secure PAM by editing its SSHD file:

```shell
sudo nano /etc/pam.d/sshd
```

- Comment out this line to prevent fallback to password authentication:

```
#@include common-auth
```

- Restart the SSH service once more to finalize all settings:

```shell
sudo systemctl restart sshd.service
```
These steps will enable you to securely access your server using two-factor authentication while maintaining the flexibility of public key authentication.