justinlillico.com

Apr 17, 2024

Enabling 2FA on SSH Using Google Authenticator for Ubuntu 22.04.4 LTS

I'm not going to mess you around here: this was oddly painful, full of conflicting information, and, given I know nothing about the implementation details of SSH, quite irritating.

Follow these steps to enable two-factor authentication (2FA) over SSH using a public key and Google Authenticator on Ubuntu 22.04.4 LTS:

  1. Update your package lists:
sudo apt-get update
  2. Install the Google Authenticator package:
sudo apt-get install libpam-google-authenticator
  3. Set up Google Authenticator:
google-authenticator
  4. Answer the prompts as follows:

  • Do you want authentication tokens to be time-based (y/n)? y

  • Do you want me to update your "~/.google_authenticator" file (y/n)? y

  • Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n)? y

  • By default, tokens are good for 30 seconds and in order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the current time. If you experience problems with poor time synchronization, you can increase the window from its default size of 1:30min to about 4min. Do you want to do so (y/n)? n

  • If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. Do you want to enable rate-limiting (y/n)? y

  5. Scan the QR code into your authenticator app.
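If you would rather skip the interactive prompts, the same answers can be supplied as command-line flags. This is a sketch, assuming the google-authenticator CLI shipped with libpam-google-authenticator; check google-authenticator --help on your system before relying on it:

```
# -t        time-based tokens
# -d        disallow reuse of the same token
# -f        write ~/.google_authenticator without asking
# -w 3      keep the default window of 3 tokens
# -r 3 -R 30  rate-limit to 3 login attempts every 30 seconds
google-authenticator -t -d -f -w 3 -r 3 -R 30
```

This produces the same ~/.google_authenticator file as answering the prompts above, which is handy when setting up several users.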

Edit the PAM SSHD configuration:

sudo nano /etc/pam.d/sshd
  • Add these lines at the bottom of the file:

auth required pam_google_authenticator.so nullok
auth required pam_permit.so

  • nullok allows users to log in without 2FA until they configure their OATH-TOTP token. Remove this option once all users are set up.
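After the edit, the tail of /etc/pam.d/sshd should look something like this (a sketch; the existing directives above it are left untouched):

```
# ... existing PAM directives for sshd above ...

# Ask for a verification code from Google Authenticator; nullok lets users
# who have not yet run google-authenticator log in without a code
auth required pam_google_authenticator.so nullok
auth required pam_permit.so
```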

Configure SSH for challenge-response authentication:

sudo nano /etc/ssh/sshd_config
  • Set ChallengeResponseAuthentication to yes: uncomment and update the line if it is already present, or add it if it is not.

Restart the SSH service to apply changes:

sudo systemctl restart sshd.service
  • If this step causes issues, debug the configuration with:

sudo sshd -t -f /etc/ssh/sshd_config

Test your configuration by opening a new SSH session in a separate terminal window, keeping your current session open in case something has gone wrong. If you already use a public key, there should be no noticeable change yet.
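For example, from a second terminal (user@your-server is a placeholder for your own login details):

```
# -v prints, among other debug output, the authentication methods the
# server offers; after the later steps you should see keyboard-interactive
# listed among them
ssh -v user@your-server
```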

Update SSHD to require 2FA:

sudo nano /etc/ssh/sshd_config
  • Add or update the following line to require a public key plus either a password or keyboard-interactive authentication:
AuthenticationMethods publickey,password publickey,keyboard-interactive

Enable keyboard-interactive authentication. On Ubuntu 22.04 LTS the option is:

KbdInteractiveAuthentication yes

On older releases the same setting is still called:

ChallengeResponseAuthentication yes
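Putting the sshd_config changes together, the relevant directives end up looking something like this (a sketch for 22.04 LTS; your file will contain many other settings, and UsePAM yes is already the Ubuntu default):

```
# Require a public key plus a second factor (password or verification code)
AuthenticationMethods publickey,password publickey,keyboard-interactive

# PAM must be enabled for the Google Authenticator module to run
UsePAM yes

# 22.04 LTS name for challenge-response prompts
KbdInteractiveAuthentication yes
```

Before restarting, you can sanity-check the effective configuration with sudo sshd -T | grep -Ei 'authenticationmethods|kbdinteractive'.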

Further secure PAM by editing its SSHD file:

sudo nano /etc/pam.d/sshd
  • Comment out this line to prevent PAM from falling back to password authentication:
    #@include common-auth

Restart the SSH service once more to finalize all settings:

sudo systemctl restart sshd.service

These steps will enable you to securely access your server using two-factor authentication while maintaining the flexibility of public key authentication.

Feb 19, 2024

Doom's AI Legacy

In the early 1990s, a groundbreaking video game called Doom not only redefined the landscape of gaming but also sparked a vision for the future of technology. While Doom itself did not utilize GPUs — since the technology was still in its infancy — it set a precedent that better graphics could lead to more immersive experiences and, consequently, greater sales. This realization fueled a desire among gamers and developers alike for more visually complex games, setting the stage for successors like Quake to fully harness the power of GPU hardware.

This push for enhanced graphics in gaming indirectly catalyzed the development of GPUs, which are now instrumental in powering the algorithms behind modern artificial intelligence (AI) and large language models (LLMs). The technological advancements spurred by the gaming industry's quest for better graphics have thus laid the groundwork for the computational capabilities essential to today's AI research and applications.

However, as we embark on this journey into AI's potential, it is crucial to proceed with both optimism and caution. While AI promises to revolutionize our approach to global challenges such as climate change and healthcare, it also raises ethical concerns and safety issues. The complexity and potential biases within AI systems, as well as their impact on society, necessitate a balanced and informed approach to their development and deployment.

Despite these challenges, the story of Doom and its indirect contribution to the rise of GPUs and AI highlights an essential truth about innovation: it often occurs in unpredictable ways. Just as the gaming industry's demand for better graphics unexpectedly contributed to the AI revolution, today's AI technologies could provide us with unprecedented tools to address the pressing issues of our time.

The implications of technological cross-pollination are vast. LLMs and AI applications are already making significant strides in fields such as materials science and scientific research, showcasing the transformative potential of AI.

In light of Doom's legacy, the message is clear: while embracing the possibilities AI offers, we must navigate this new terrain with care and responsibility. Demystifying AI and promoting a culture of ethical innovation can help us leverage these technologies to their fullest potential without falling prey to their risks.

Doom's enduring impact extends beyond the realm of gaming, indirectly influencing the development of technologies that underpin the current AI landscape. As we explore AI's role in shaping our future, our decisions today will determine whether we can harness this potential to overcome our greatest challenges or whether we will be overwhelmed by the very technologies we hoped would save us. The creativity and exploratory spirit that Doom inspired could very well be the key to our collective salvation.

Jan 11, 2024

Redefining Human-Computer Interaction: The Revolutionary Role of GPTs

In the vast expanse of technological evolution, the way humans interact with computers has been a constant study in innovation. From the clunky keyboards of the early computing era to the sleek touchscreens of today, each step has been a leap towards greater efficiency and intuitiveness. Today, we stand at the cusp of another monumental shift, heralded by the advent of Generative Pre-trained Transformers (GPTs). These are not mere tools; they are harbingers of a future where our interactions with computers become more natural and human-like than ever before.

For over six decades, since the inception of the modern computer mouse, our primary interaction with computers has been through graphical user interfaces. This longstanding reliance on keyboards and mice highlights a stagnant era in human-computer interaction. However, GPTs promise a seismic shift from these traditional interfaces. They offer a more intuitive, conversational, and context-aware interaction, akin to speaking with a knowledgeable assistant rather than inputting commands through clicks and keystrokes.

Unlike traditional search engines that rely on keyword-based queries, GPTs understand and process natural language, providing contextually relevant and conversational responses. This nuanced understanding of human language and intent marks a significant departure from the impersonal, list-based outputs of search engines. For developers, this opens a new realm of possibilities for creating user interfaces that are more aligned with natural human communication.

In this new era, developers are no longer just coding for functionality within the constraints of a graphical interface. Instead, they are designing experiences that are more akin to human-to-human interaction. This shift requires a new set of skills focused on natural language understanding and AI-driven design, pushing the boundaries of what's possible in software development.

Imagine booking a holiday or running a business automation through a simple conversation with your computer. GPTs make this possible. They can interpret your requirements, ask relevant follow-up questions, and execute tasks with a level of ease and understanding that traditional interfaces cannot match. This capability is set to revolutionize how we perform a myriad of daily tasks, making technology more accessible and efficient.

Looking ahead, GPTs are poised to become the heart of computer systems, translating user intent into actionable commands. This goes beyond text-based interaction; imagine a future where a camera interprets your hand gestures, or a microphone picks up your spoken words, and a GPT translates these into digital commands. This leap forward in human-computer interaction is not just about convenience; it's about augmenting human capabilities and freeing up our time and mental resources for more creative and meaningful pursuits.

The arrival of GPTs marks a new chapter in the story of human-computer interaction. As we move away from the confines of graphical user interfaces and towards a more natural, conversational mode of interaction, we unlock a world of possibilities. It's a journey from interacting with a machine to conversing with an intelligence that understands us. The potential of GPTs to transform our digital lives is immense, and the time to embrace this change is now.
